Overview


Dataset statistics

Number of variables: 29
Number of observations: 1056
Missing cells: 5262
Missing cells (%): 17.2%
Duplicate rows: 0
Duplicate rows (%): 0.0%
Total size in memory: 247.5 KiB
Average record size in memory: 240.0 B

Variable types

Text: 11
Numeric: 7
Categorical: 5
DateTime: 6

Alerts

lang has constant value "Python" [Constant]
ext is highly imbalanced (98.1%) [Imbalance]
max_stars_repo_licenses is highly imbalanced (57.1%) [Imbalance]
max_issues_repo_licenses is highly imbalanced (57.0%) [Imbalance]
max_forks_repo_licenses is highly imbalanced (57.0%) [Imbalance]
max_stars_count has 490 (46.4%) missing values [Missing]
max_stars_repo_stars_event_min_datetime has 490 (46.4%) missing values [Missing]
max_stars_repo_stars_event_max_datetime has 490 (46.4%) missing values [Missing]
max_issues_count has 650 (61.6%) missing values [Missing]
max_issues_repo_issues_event_min_datetime has 650 (61.6%) missing values [Missing]
max_issues_repo_issues_event_max_datetime has 650 (61.6%) missing values [Missing]
max_forks_count has 614 (58.1%) missing values [Missing]
max_forks_repo_forks_event_min_datetime has 614 (58.1%) missing values [Missing]
max_forks_repo_forks_event_max_datetime has 614 (58.1%) missing values [Missing]
max_line_length is highly skewed (γ1 = 23.04552003) [Skewed]
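The missing-value and skewness figures behind these alerts can be re-derived by hand. A minimal plain-Python sketch (the example column is illustrative, not the profiled data):

```python
import statistics  # stdlib only; no profiling library required

def missing_pct(values):
    """Share of missing (None) entries, as a percentage."""
    return 100.0 * sum(v is None for v in values) / len(values)

def skewness(values):
    """Fisher-Pearson moment coefficient of skewness (g1), ignoring missing."""
    xs = [v for v in values if v is not None]
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    return m3 / m2 ** 1.5

col = [1, 1, 2, 3, None, 100]  # illustrative values
print(missing_pct(col))        # one None out of six entries
print(skewness(col))           # a single large value drags g1 positive
```

A long right tail, as in max_line_length here, produces a large positive g1.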

Reproduction

Analysis started: 2024-10-03 08:56:13.122205
Analysis finished: 2024-10-03 08:56:17.764262
Duration: 4.64 seconds
Software version: ydata-profiling v4.10.0
Download configuration: config.json

Variables

hexsha
Text

Distinct: 1000
Distinct (%): 94.7%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 40
Median length: 40
Mean length: 40
Min length: 40

Characters and Unicode

Total characters: 42240
Distinct characters: 16
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1
The Unicode Standard assigns character properties to each code point, which can be used to analyse textual variables.
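Python's standard library exposes the general-category property directly; a quick sketch (script and block lookups are not in the stdlib, so the profiler resolves those itself):

```python
import unicodedata

# General category of each code point: 'Ll' = lowercase letter,
# 'Nd' = decimal digit -- the two categories hex digests are built from.
for ch in "d9a1":
    print(ch, unicodedata.category(ch), unicodedata.name(ch))
```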

Unique

Unique: 973
Unique (%): 92.1%

Sample

1st row: d99a1e98eccb58cbc0c0cef6e9e6702f33461b0e
2nd row: d99a20277c32bb1e28312f42ab6d732f38323169
3rd row: d99b5ab0ec594ac30b1d197b23a5cda7c48151d5
4th row: d99e8a9a95f28da6c2d4d1ee42e95a270ab08977
5th row: d99ed7256245422c7c5dd3c60b0661e4f78183ea
Common Values

Value | Count | Frequency (%)
8a91ba22fcba12ba8237fcf117a449485cdd3de1 | 7 | 0.7%
8a20872ac762ad5db9d06e05df401ef72a6b24c6 | 6 | 0.6%
0a33db09aef1c74c5ffed0995a5bf7a3bfec7f84 | 6 | 0.6%
6a7c09860b07db2134a799e024cf2d3ffbf7dc17 | 6 | 0.6%
6acc395ad3bfafbc612c2d532d32bbb5ce80e13f | 5 | 0.5%
d9fe6882b9e62ad1b9764fdded272caab1b5cf79 | 4 | 0.4%
6ad4fd638f3c8440ee1f4046774d447aac8466fb | 4 | 0.4%
0a20c183c03d4133fca24e84a8755331075102c6 | 3 | 0.3%
6a631c95edefbd6ccab71b999ffa359886535e5b | 3 | 0.3%
6ada8fe0ced127e4eb158cbef0bc674aa2bd2da2 | 3 | 0.3%
Other values (990) | 1009 | 95.5%

Most occurring characters

Value | Count | Frequency (%)
a | 3391 | 8.0%
6 | 2852 | 6.8%
8 | 2730 | 6.5%
d | 2675 | 6.3%
0 | 2649 | 6.3%
9 | 2627 | 6.2%
c | 2618 | 6.2%
4 | 2582 | 6.1%
f | 2547 | 6.0%
b | 2542 | 6.0%
Other values (6) | 15027 | 35.6%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 42240 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 42240 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 42240 | 100.0%

size
Real number (ℝ)

Distinct: 941
Distinct (%): 89.1%
Missing: 0
Missing (%): 0.0%
Infinite: 0
Infinite (%): 0.0%
Mean: 8144.553
Minimum: 10
Maximum: 252781
Zeros: 0
Zeros (%): 0.0%
Negative: 0
Negative (%): 0.0%
Memory size: 16.5 KiB

Quantile statistics

Minimum: 10
5-th percentile: 193
Q1: 1035
Median: 2622.5
Q3: 7645.5
95-th percentile: 32032
Maximum: 252781
Range: 252771
Interquartile range (IQR): 6610.5

Descriptive statistics

Standard deviation: 18487.363
Coefficient of variation (CV): 2.2699052
Kurtosis: 69.385801
Mean: 8144.553
Median Absolute Deviation (MAD): 2056
Skewness: 6.9462205
Sum: 8600648
Variance: 3.4178261 × 10^8
Monotonicity: Not monotonic
Histogram with fixed size bins (bins=50)
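The quantile and descriptive statistics above are standard summaries of the column; a minimal sketch with Python's `statistics` module, using illustrative values rather than the real size column:

```python
import statistics

sizes = [10, 193, 1035, 2622, 2623, 7646, 32032, 252781]  # illustrative values

# quartile cut points (Q1, median, Q3)
q1, median, q3 = statistics.quantiles(sizes, n=4)
iqr = q3 - q1                                           # interquartile range
cv = statistics.pstdev(sizes) / statistics.mean(sizes)  # coefficient of variation
value_range = max(sizes) - min(sizes)
print(median, iqr, cv, value_range)
```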
Common Values

Value | Count | Frequency (%)
31466 | 7 | 0.7%
11429 | 6 | 0.6%
69998 | 6 | 0.6%
13403 | 6 | 0.6%
4123 | 5 | 0.5%
1195 | 4 | 0.4%
9991 | 4 | 0.4%
2540 | 4 | 0.4%
987 | 3 | 0.3%
416 | 3 | 0.3%
Other values (931) | 1008 | 95.5%

Minimum 10 values

Value | Count | Frequency (%)
10 | 1 | 0.1%
13 | 1 | 0.1%
22 | 1 | 0.1%
35 | 1 | 0.1%
36 | 1 | 0.1%
40 | 2 | 0.2%
41 | 1 | 0.1%
42 | 2 | 0.2%
45 | 1 | 0.1%
46 | 1 | 0.1%

Maximum 10 values

Value | Count | Frequency (%)
252781 | 1 | 0.1%
248866 | 1 | 0.1%
175651 | 1 | 0.1%
142263 | 1 | 0.1%
118727 | 1 | 0.1%
105704 | 1 | 0.1%
103584 | 1 | 0.1%
101780 | 1 | 0.1%
96782 | 1 | 0.1%
87582 | 1 | 0.1%

ext
Categorical

IMBALANCE 

Distinct: 3
Distinct (%): 0.3%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

py: 1053, gyp: 2, bzl: 1

Length

Max length: 3
Median length: 2
Mean length: 2.0028409
Min length: 2

Characters and Unicode

Total characters: 2115
Distinct characters: 6
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1

Unique

Unique: 1
Unique (%): 0.1%

Sample

1st row: py
2nd row: py
3rd row: py
4th row: py
5th row: py

Common Values

Value | Count | Frequency (%)
py | 1053 | 99.7%
gyp | 2 | 0.2%
bzl | 1 | 0.1%

Length

Histogram of lengths of the category

Most occurring characters

Value | Count | Frequency (%)
p | 1055 | 49.9%
y | 1055 | 49.9%
g | 2 | 0.1%
b | 1 | < 0.1%
z | 1 | < 0.1%
l | 1 | < 0.1%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 2115 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 2115 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 2115 | 100.0%

lang
Categorical

CONSTANT 

Distinct: 1
Distinct (%): 0.1%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Python: 1056

Length

Max length: 6
Median length: 6
Mean length: 6
Min length: 6

Characters and Unicode

Total characters: 6336
Distinct characters: 6
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1

Unique

Unique: 0
Unique (%): 0.0%

Sample

1st row: Python
2nd row: Python
3rd row: Python
4th row: Python
5th row: Python

Common Values

Value | Count | Frequency (%)
Python | 1056 | 100.0%

Length

Histogram of lengths of the category

Most occurring characters

Value | Count | Frequency (%)
P | 1056 | 16.7%
y | 1056 | 16.7%
t | 1056 | 16.7%
h | 1056 | 16.7%
o | 1056 | 16.7%
n | 1056 | 16.7%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 6336 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 6336 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 6336 | 100.0%

Distinct: 979
Distinct (%): 92.7%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 130
Median length: 80
Mean length: 33.768939
Min length: 4

Characters and Unicode

Total characters: 35660
Distinct characters: 71
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1

Unique

Unique: 948
Unique (%): 89.8%

Sample

1st row: public_data/serializers.py
2nd row: quick_search/admin.py
3rd row: rasa/train.py
4th row: coding_intereview/1475. Final Prices With a Special Discount in a Shop.py
5th row: rplugin/python3/denite/ui/default.py
Common Values

Value | Count | Frequency (%)
setup.py | 13 | 1.1%
 | 8 | 0.7%
main.py | 8 | 0.7%
pandas/core/indexes/range.py | 7 | 0.6%
model_selection/tests/test_search.py | 6 | 0.5%
tests/python/unittest/test_tir_schedule_compute_inline.py | 6 | 0.5%
python/tvm/contrib/nvcc.py | 6 | 0.5%
flink-ai-flow/lib/notification_service/notification_service/mongo_event_storage.py | 5 | 0.4%
var/spack/repos/builtin/packages/py-black/package.py | 4 | 0.3%
lib/spack/spack/multimethod.py | 4 | 0.3%
Other values (1045) | 1084 | 94.2%

Most occurring characters

Value | Count | Frequency (%)
e | 3069 | 8.6%
t | 2718 | 7.6%
s | 2507 | 7.0%
/ | 2300 | 6.4%
p | 2159 | 6.1%
a | 1967 | 5.5%
o | 1901 | 5.3%
i | 1886 | 5.3%
r | 1728 | 4.8%
n | 1702 | 4.8%
Other values (61) | 13723 | 38.5%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 35660 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 35660 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 35660 | 100.0%

Distinct: 988
Distinct (%): 93.6%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 69
Median length: 48
Mean length: 22.212121
Min length: 8

Characters and Unicode

Total characters: 23456
Distinct characters: 66
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1

Unique

Unique: 950
Unique (%): 90.0%

Sample

1st row: MTES-MCT/sparte
2nd row: naman1901/django-quick-search
3rd row: Amirali-Shirkh/rasa-for-botfront
4th row: Jahidul007/Python-Bootcamp
5th row: timgates42/denite.nvim
Common Values

Value | Count | Frequency (%)
mujtahidalam/pandas | 7 | 0.7%
ntanhbk44/tvm | 6 | 0.6%
xiebaiyuan/tvm | 6 | 0.6%
jessica-tu/jupyter | 6 | 0.6%
lisy09/flink-ai-extended | 5 | 0.5%
dwstreetnnl/spack | 4 | 0.4%
kkauder/spack | 4 | 0.4%
enjoylifefund/machighsierra-py36-pkgs | 3 | 0.3%
pierre-haessig/matplotlib | 3 | 0.3%
geos-esm/aeroapps | 3 | 0.3%
Other values (978) | 1009 | 95.5%

Most occurring characters

Value | Count | Frequency (%)
a | 1857 | 7.9%
e | 1834 | 7.8%
i | 1419 | 6.0%
o | 1396 | 6.0%
n | 1307 | 5.6%
r | 1279 | 5.5%
t | 1277 | 5.4%
s | 1162 | 5.0%
/ | 1056 | 4.5%
l | 890 | 3.8%
Other values (56) | 9979 | 42.5%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 23456 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 23456 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 23456 | 100.0%

Distinct: 988
Distinct (%): 93.6%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 40
Median length: 40
Mean length: 40
Min length: 40

Characters and Unicode

Total characters: 42240
Distinct characters: 16
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1

Unique

Unique: 950
Unique (%): 90.0%

Sample

1st row: 3b8ae6d21da81ca761d64ae9dfe2c8f54487211c
2nd row: 7b93554ed9fa4721e52372f9fd1a395d94cc04a7
3rd row: 36aa24ad31241c5d1a180bbe34e1c8c50da40ff7
4th row: 3c870587465ff66c2c1871c8d3c4eea72463abda
5th row: 12a9b5456f5a4600afeb0ba284ce1098bd35e501
Common Values

Value | Count | Frequency (%)
526468c8fe6fc5157aaf2fce327c5ab2a3350f49 | 7 | 0.7%
f89a929f09f7a0b0ccd0f4d46dc2b1c562839087 | 6 | 0.6%
726239d788e3b90cbe4818271ca5361c46d8d246 | 6 | 0.6%
917e02bc29e0fa06bd8adb25fe5388ac381ec829 | 6 | 0.6%
011a5a332f7641f66086653e715d0596eab2e107 | 5 | 0.5%
8f929707147c49606d00386a10161529dad4ec56 | 4 | 0.4%
6ae8d5c380c1f42094b05d38be26b03650aafb39 | 4 | 0.4%
5668b5785296b314ea1321057420bcd077dba9ea | 3 | 0.3%
0d945044ca3fbf98cad55912584ef80911f330c6 | 3 | 0.3%
874dad6f34420c014d98eccbe81a061bdc0110cf | 3 | 0.3%
Other values (978) | 1009 | 95.5%

Most occurring characters

Value | Count | Frequency (%)
4 | 2732 | 6.5%
6 | 2686 | 6.4%
9 | 2676 | 6.3%
a | 2659 | 6.3%
0 | 2656 | 6.3%
d | 2652 | 6.3%
8 | 2647 | 6.3%
b | 2641 | 6.3%
2 | 2639 | 6.2%
e | 2639 | 6.2%
Other values (6) | 15613 | 37.0%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 42240 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 42240 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 42240 | 100.0%

max_stars_repo_licenses
Categorical

IMBALANCE 

Distinct: 28
Distinct (%): 2.7%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

MIT: 558, Apache-2.0: 279, BSD-3-Clause: 112, Unlicense: 22, BSD-2-Clause: 18, Other values (23): 67

Length

Max length: 36
Median length: 3
Mean length: 6.4943182
Min length: 3

Characters and Unicode

Total characters: 6858
Distinct characters: 46
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1

Unique

Unique: 9
Unique (%): 0.9%

Sample

1st row: MIT
2nd row: MIT
3rd row: Apache-2.0
4th row: MIT
5th row: MIT

Common Values

Value | Count | Frequency (%)
MIT | 558 | 52.8%
Apache-2.0 | 279 | 26.4%
BSD-3-Clause | 112 | 10.6%
Unlicense | 22 | 2.1%
BSD-2-Clause | 18 | 1.7%
ECL-2.0 | 13 | 1.2%
CC0-1.0 | 8 | 0.8%
CC-BY-4.0 | 5 | 0.5%
PSF-2.0 | 5 | 0.5%
BSD-3-Clause-No-Nuclear-License-2014 | 4 | 0.4%
Other values (18) | 32 | 3.0%

Length

Histogram of lengths of the category

Most occurring characters

Value | Count | Frequency (%)
- | 629 | 9.2%
T | 565 | 8.2%
M | 564 | 8.2%
I | 562 | 8.2%
e | 480 | 7.0%
a | 424 | 6.2%
0 | 337 | 4.9%
. | 328 | 4.8%
2 | 325 | 4.7%
c | 312 | 4.5%
Other values (36) | 2332 | 34.0%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 6858 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 6858 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 6858 | 100.0%

max_stars_count
Real number (ℝ)

MISSING 

Distinct: 132
Distinct (%): 23.3%
Missing: 490
Missing (%): 46.4%
Infinite: 0
Infinite (%): 0.0%
Mean: 293.9788
Minimum: 1
Maximum: 30023
Zeros: 0
Zeros (%): 0.0%
Negative: 0
Negative (%): 0.0%
Memory size: 16.5 KiB

Quantile statistics

Minimum: 1
5-th percentile: 1
Q1: 1
Median: 3
Q3: 22.75
95-th percentile: 852
Maximum: 30023
Range: 30022
Interquartile range (IQR): 21.75

Descriptive statistics

Standard deviation: 1883.2976
Coefficient of variation (CV): 6.4062361
Kurtosis: 152.09445
Mean: 293.9788
Median Absolute Deviation (MAD): 2
Skewness: 11.497477
Sum: 166392
Variance: 3546809.8
Monotonicity: Not monotonic
Histogram with fixed size bins (bins=50)

Common Values

Value | Count | Frequency (%)
1 | 172 | 16.3%
2 | 66 | 6.2%
3 | 47 | 4.5%
4 | 20 | 1.9%
5 | 18 | 1.7%
11 | 13 | 1.2%
8 | 13 | 1.2%
16 | 9 | 0.9%
10 | 9 | 0.9%
6 | 8 | 0.8%
Other values (122) | 191 | 18.1%
(Missing) | 490 | 46.4%

Minimum 10 values

Value | Count | Frequency (%)
1 | 172 | 16.3%
2 | 66 | 6.2%
3 | 47 | 4.5%
4 | 20 | 1.9%
5 | 18 | 1.7%
6 | 8 | 0.8%
7 | 7 | 0.7%
8 | 13 | 1.2%
9 | 7 | 0.7%
10 | 9 | 0.9%

Maximum 10 values

Value | Count | Frequency (%)
30023 | 1 | 0.1%
21382 | 1 | 0.1%
17769 | 1 | 0.1%
10882 | 1 | 0.1%
7482 | 1 | 0.1%
6342 | 1 | 0.1%
5905 | 1 | 0.1%
5079 | 1 | 0.1%
4145 | 1 | 0.1%
3612 | 1 | 0.1%

max_stars_repo_stars_event_min_datetime
DateTime

Distinct: 524
Distinct (%): 92.6%
Missing: 490
Missing (%): 46.4%
Memory size: 16.5 KiB
Minimum: 2015-01-01 03:39:46+00:00
Maximum: 2022-03-29 04:41:22+00:00
Histogram with fixed size bins (bins=50)

max_stars_repo_stars_event_max_datetime
DateTime

Distinct: 524
Distinct (%): 92.6%
Missing: 490
Missing (%): 46.4%
Memory size: 16.5 KiB
Minimum: 2015-02-26 23:39:39+00:00
Maximum: 2022-03-31 22:25:04+00:00
Histogram with fixed size bins (bins=50)

Distinct: 979
Distinct (%): 92.7%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 130
Median length: 80
Mean length: 33.899621
Min length: 4

Characters and Unicode

Total characters: 35798
Distinct characters: 71
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1

Unique

Unique: 948
Unique (%): 89.8%

Sample

1st row: public_data/serializers.py
2nd row: quick_search/admin.py
3rd row: rasa/train.py
4th row: coding_intereview/1475. Final Prices With a Special Discount in a Shop.py
5th row: rplugin/python3/denite/ui/default.py
Common Values

Value | Count | Frequency (%)
setup.py | 13 | 1.1%
 | 8 | 0.7%
main.py | 8 | 0.7%
pandas/core/indexes/range.py | 7 | 0.6%
model_selection/tests/test_search.py | 6 | 0.5%
tests/python/unittest/test_tir_schedule_compute_inline.py | 6 | 0.5%
python/tvm/contrib/nvcc.py | 6 | 0.5%
flink-ai-flow/lib/notification_service/notification_service/mongo_event_storage.py | 5 | 0.4%
var/spack/repos/builtin/packages/py-black/package.py | 4 | 0.3%
lib/spack/spack/multimethod.py | 4 | 0.3%
Other values (1045) | 1084 | 94.2%

Most occurring characters

Value | Count | Frequency (%)
e | 3078 | 8.6%
t | 2728 | 7.6%
s | 2518 | 7.0%
/ | 2314 | 6.5%
p | 2167 | 6.1%
a | 1976 | 5.5%
o | 1909 | 5.3%
i | 1893 | 5.3%
r | 1730 | 4.8%
n | 1704 | 4.8%
Other values (61) | 13781 | 38.5%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 35798 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 35798 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 35798 | 100.0%

Distinct: 988
Distinct (%): 93.6%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 69
Median length: 48
Mean length: 22.350379
Min length: 8

Characters and Unicode

Total characters: 23602
Distinct characters: 66
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1

Unique

Unique: 950
Unique (%): 90.0%

Sample

1st row: MTES-MCT/sparte
2nd row: naman1901/django-quick-search
3rd row: Amirali-Shirkh/rasa-for-botfront
4th row: purusharthmalik/Python-Bootcamp
5th row: timgates42/denite.nvim
Common Values

Value | Count | Frequency (%)
mujtahidalam/pandas | 7 | 0.7%
ntanhbk44/tvm | 6 | 0.6%
xiebaiyuan/tvm | 6 | 0.6%
jessica-tu/jupyter | 6 | 0.6%
sentimentist/flink-ai-extended | 5 | 0.5%
dwstreetnnl/spack | 4 | 0.4%
kkauder/spack | 4 | 0.4%
enjoylifefund/machighsierra-py36-pkgs | 3 | 0.3%
pierre-haessig/matplotlib | 3 | 0.3%
geos-esm/aeroapps | 3 | 0.3%
Other values (978) | 1009 | 95.5%

Most occurring characters

Value | Count | Frequency (%)
a | 1859 | 7.9%
e | 1852 | 7.8%
i | 1445 | 6.1%
o | 1407 | 6.0%
n | 1326 | 5.6%
t | 1294 | 5.5%
r | 1285 | 5.4%
s | 1161 | 4.9%
/ | 1056 | 4.5%
l | 892 | 3.8%
Other values (56) | 10025 | 42.5%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 23602 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 23602 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 23602 | 100.0%

Distinct: 988
Distinct (%): 93.6%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 40
Median length: 40
Mean length: 40
Min length: 40

Characters and Unicode

Total characters: 42240
Distinct characters: 16
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1

Unique

Unique: 950
Unique (%): 90.0%

Sample

1st row: 3b8ae6d21da81ca761d64ae9dfe2c8f54487211c
2nd row: 7b93554ed9fa4721e52372f9fd1a395d94cc04a7
3rd row: 36aa24ad31241c5d1a180bbe34e1c8c50da40ff7
4th row: 2ed1cf886d1081de200b0fdd4cb4e28008c7e3d1
5th row: 12a9b5456f5a4600afeb0ba284ce1098bd35e501
Common Values

Value | Count | Frequency (%)
526468c8fe6fc5157aaf2fce327c5ab2a3350f49 | 7 | 0.7%
f89a929f09f7a0b0ccd0f4d46dc2b1c562839087 | 6 | 0.6%
726239d788e3b90cbe4818271ca5361c46d8d246 | 6 | 0.6%
917e02bc29e0fa06bd8adb25fe5388ac381ec829 | 6 | 0.6%
689d000f2db8919fd80e0725a1609918ca4a26f4 | 5 | 0.5%
8f929707147c49606d00386a10161529dad4ec56 | 4 | 0.4%
6ae8d5c380c1f42094b05d38be26b03650aafb39 | 4 | 0.4%
5668b5785296b314ea1321057420bcd077dba9ea | 3 | 0.3%
0d945044ca3fbf98cad55912584ef80911f330c6 | 3 | 0.3%
874dad6f34420c014d98eccbe81a061bdc0110cf | 3 | 0.3%
Other values (978) | 1009 | 95.5%

Most occurring characters

Value | Count | Frequency (%)
4 | 2748 | 6.5%
9 | 2709 | 6.4%
d | 2700 | 6.4%
6 | 2695 | 6.4%
0 | 2666 | 6.3%
a | 2655 | 6.3%
b | 2653 | 6.3%
8 | 2650 | 6.3%
2 | 2648 | 6.3%
c | 2615 | 6.2%
Other values (6) | 15501 | 36.7%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 42240 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 42240 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 42240 | 100.0%

max_issues_repo_licenses
Categorical

IMBALANCE 

Distinct: 28
Distinct (%): 2.7%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

MIT: 558, Apache-2.0: 279, BSD-3-Clause: 111, Unlicense: 22, BSD-2-Clause: 18, Other values (23): 68

Length

Max length: 36
Median length: 3
Mean length: 6.4895833
Min length: 3

Characters and Unicode

Total characters: 6853
Distinct characters: 46
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1

Unique

Unique: 9
Unique (%): 0.9%

Sample

1st row: MIT
2nd row: MIT
3rd row: Apache-2.0
4th row: MIT
5th row: MIT

Common Values

Value | Count | Frequency (%)
MIT | 558 | 52.8%
Apache-2.0 | 279 | 26.4%
BSD-3-Clause | 111 | 10.5%
Unlicense | 22 | 2.1%
BSD-2-Clause | 18 | 1.7%
ECL-2.0 | 13 | 1.2%
CC0-1.0 | 8 | 0.8%
PSF-2.0 | 6 | 0.6%
CC-BY-4.0 | 5 | 0.5%
MIT-0 | 4 | 0.4%
Other values (18) | 32 | 3.0%

Length

Histogram of lengths of the category

Most occurring characters

Value | Count | Frequency (%)
- | 628 | 9.2%
T | 565 | 8.2%
M | 564 | 8.2%
I | 562 | 8.2%
e | 479 | 7.0%
a | 423 | 6.2%
0 | 338 | 4.9%
. | 329 | 4.8%
2 | 326 | 4.8%
c | 312 | 4.6%
Other values (36) | 2327 | 34.0%

Most occurring categories

Value | Count | Frequency (%)
(unknown) | 6853 | 100.0%

Most occurring scripts

Value | Count | Frequency (%)
(unknown) | 6853 | 100.0%

Most occurring blocks

Value | Count | Frequency (%)
(unknown) | 6853 | 100.0%

max_issues_count
Real number (ℝ)

MISSING 

Distinct: 124
Distinct (%): 30.5%
Missing: 650
Missing (%): 61.6%
Infinite: 0
Infinite (%): 0.0%
Mean: 440.69704
Minimum: 1
Maximum: 30371
Zeros: 0
Zeros (%): 0.0%
Negative: 0
Negative (%): 0.0%
Memory size: 16.5 KiB

Quantile statistics

Minimum: 1
5-th percentile: 1
Q1: 2
Median: 8
Q3: 60.75
95-th percentile: 966.5
Maximum: 30371
Range: 30370
Interquartile range (IQR): 58.75

Descriptive statistics

Standard deviation: 2743.6052
Coefficient of variation (CV): 6.2256038
Kurtosis: 90.021206
Mean: 440.69704
Median Absolute Deviation (MAD): 7
Skewness: 9.2249806
Sum: 178923
Variance: 7527369.4
Monotonicity: Not monotonic
Histogram with fixed size bins (bins=50)

Common Values

Value | Count | Frequency (%)
1 | 77 | 7.3%
2 | 33 | 3.1%
3 | 25 | 2.4%
4 | 19 | 1.8%
5 | 15 | 1.4%
7 | 14 | 1.3%
8 | 12 | 1.1%
6 | 10 | 0.9%
17 | 9 | 0.9%
12 | 8 | 0.8%
Other values (114) | 184 | 17.4%
(Missing) | 650 | 61.6%

Minimum 10 values

Value | Count | Frequency (%)
1 | 77 | 7.3%
2 | 33 | 3.1%
3 | 25 | 2.4%
4 | 19 | 1.8%
5 | 15 | 1.4%
6 | 10 | 0.9%
7 | 14 | 1.3%
8 | 12 | 1.1%
9 | 7 | 0.7%
10 | 6 | 0.6%

Maximum 10 values

Value | Count | Frequency (%)
30371 | 2 | 0.2%
24710 | 1 | 0.1%
19689 | 1 | 0.1%
9565 | 1 | 0.1%
9317 | 1 | 0.1%
6097 | 1 | 0.1%
4640 | 1 | 0.1%
2851 | 1 | 0.1%
2543 | 1 | 0.1%
2198 | 1 | 0.1%

max_issues_repo_issues_event_min_datetime
DateTime

Distinct: 372
Distinct (%): 91.6%
Missing: 650
Missing (%): 61.6%
Memory size: 16.5 KiB
Minimum: 2015-01-01 08:06:24+00:00
Maximum: 2022-03-19 18:09:39+00:00
Histogram with fixed size bins (bins=50)

max_issues_repo_issues_event_max_datetime
DateTime

Distinct: 372
Distinct (%): 91.6%
Missing: 650
Missing (%): 61.6%
Memory size: 16.5 KiB
Minimum: 2015-01-13 09:14:47+00:00
Maximum: 2022-03-31 23:59:30+00:00
Histogram with fixed size bins (bins=50)

max_forks_repo_path
Text

Distinct: 979
Distinct (%): 92.7%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 131
Median length: 80
Mean length: 33.905303
Min length: 4

Characters and Unicode

Total characters: 35804
Distinct characters: 71
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1
The Unicode Standard assigns character properties to each code point, which can be used to analyse textual variables.
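As a sketch of what such character-level profiling looks like, Python's standard unicodedata module exposes the general category of each code point (script and block lookups need additional tables and are assumed out of scope here):

```python
import unicodedata
from collections import Counter

# Count Unicode general categories over a sample path-like value.
text = "rplugin/python3/denite/ui/default.py"
categories = Counter(unicodedata.category(ch) for ch in text)
# 'Ll' = lowercase letter, 'Nd' = decimal digit, 'Po' = other punctuation, ...
```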

Unique

Unique: 948
Unique (%): 89.8%

Sample

1st row: public_data/serializers.py
2nd row: quick_search/admin.py
3rd row: rasa/train.py
4th row: coding_intereview/1475. Final Prices With a Special Discount in a Shop.py
5th row: rplugin/python3/denite/ui/default.py
Value | Count | Frequency (%)
setup.py | 13 | 1.1%
 | 8 | 0.7%
main.py | 8 | 0.7%
pandas/core/indexes/range.py | 7 | 0.6%
model_selection/tests/test_search.py | 6 | 0.5%
tests/python/unittest/test_tir_schedule_compute_inline.py | 6 | 0.5%
python/tvm/contrib/nvcc.py | 6 | 0.5%
flink-ai-flow/lib/notification_service/notification_service/mongo_event_storage.py | 5 | 0.4%
var/spack/repos/builtin/packages/py-black/package.py | 4 | 0.3%
lib/spack/spack/multimethod.py | 4 | 0.3%
Other values (1045) | 1084 | 94.2%

Most occurring characters

Value | Count | Frequency (%)
e | 3078 | 8.6%
t | 2728 | 7.6%
s | 2518 | 7.0%
/ | 2312 | 6.5%
p | 2166 | 6.0%
a | 1975 | 5.5%
o | 1910 | 5.3%
i | 1894 | 5.3%
r | 1732 | 4.8%
n | 1706 | 4.8%
Other values (61) | 13785 | 38.5%

Most occurring categories

(unknown) | 35804 | 100.0%

Most occurring scripts

(unknown) | 35804 | 100.0%

Most occurring blocks

(unknown) | 35804 | 100.0%

max_forks_repo_name
Text

Distinct: 988
Distinct (%): 93.6%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 69
Median length: 48
Mean length: 22.375947
Min length: 8

Characters and Unicode

Total characters: 23629
Distinct characters: 66
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1
The Unicode Standard assigns character properties to each code point, which can be used to analyse textual variables.

Unique

Unique: 950
Unique (%): 90.0%

Sample

1st row: MTES-MCT/sparte
2nd row: HereWithoutPermission/django-quick-search
3rd row: Amirali-Shirkh/rasa-for-botfront
4th row: purusharthmalik/Python-Bootcamp
5th row: timgates42/denite.nvim
Value | Count | Frequency (%)
mujtahidalam/pandas | 7 | 0.7%
ntanhbk44/tvm | 6 | 0.6%
xiebaiyuan/tvm | 6 | 0.6%
jessica-tu/jupyter | 6 | 0.6%
sentimentist/flink-ai-extended | 5 | 0.5%
dwstreetnnl/spack | 4 | 0.4%
kkauder/spack | 4 | 0.4%
enjoylifefund/machighsierra-py36-pkgs | 3 | 0.3%
pierre-haessig/matplotlib | 3 | 0.3%
geos-esm/aeroapps | 3 | 0.3%
Other values (978) | 1009 | 95.5%

Most occurring characters

Value | Count | Frequency (%)
a | 1858 | 7.9%
e | 1855 | 7.9%
i | 1454 | 6.2%
o | 1407 | 6.0%
n | 1324 | 5.6%
t | 1306 | 5.5%
r | 1285 | 5.4%
s | 1175 | 5.0%
/ | 1056 | 4.5%
l | 897 | 3.8%
Other values (56) | 10012 | 42.4%

Most occurring categories

(unknown) | 23629 | 100.0%

Most occurring scripts

(unknown) | 23629 | 100.0%

Most occurring blocks

(unknown) | 23629 | 100.0%

max_forks_repo_head_hexsha
Text

Distinct: 988
Distinct (%): 93.6%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 40
Median length: 40
Mean length: 40
Min length: 40

Characters and Unicode

Total characters: 42240
Distinct characters: 16
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1
The Unicode Standard assigns character properties to each code point, which can be used to analyse textual variables.

Unique

Unique: 950
Unique (%): 90.0%

Sample

1st row: 3b8ae6d21da81ca761d64ae9dfe2c8f54487211c
2nd row: 7b93554ed9fa4721e52372f9fd1a395d94cc04a7
3rd row: 36aa24ad31241c5d1a180bbe34e1c8c50da40ff7
4th row: 2ed1cf886d1081de200b0fdd4cb4e28008c7e3d1
5th row: 12a9b5456f5a4600afeb0ba284ce1098bd35e501
Value | Count | Frequency (%)
526468c8fe6fc5157aaf2fce327c5ab2a3350f49 | 7 | 0.7%
f89a929f09f7a0b0ccd0f4d46dc2b1c562839087 | 6 | 0.6%
726239d788e3b90cbe4818271ca5361c46d8d246 | 6 | 0.6%
917e02bc29e0fa06bd8adb25fe5388ac381ec829 | 6 | 0.6%
689d000f2db8919fd80e0725a1609918ca4a26f4 | 5 | 0.5%
8f929707147c49606d00386a10161529dad4ec56 | 4 | 0.4%
6ae8d5c380c1f42094b05d38be26b03650aafb39 | 4 | 0.4%
5668b5785296b314ea1321057420bcd077dba9ea | 3 | 0.3%
0d945044ca3fbf98cad55912584ef80911f330c6 | 3 | 0.3%
874dad6f34420c014d98eccbe81a061bdc0110cf | 3 | 0.3%
Other values (978) | 1009 | 95.5%

Most occurring characters

Value | Count | Frequency (%)
4 | 2734 | 6.5%
9 | 2706 | 6.4%
d | 2684 | 6.4%
6 | 2680 | 6.3%
0 | 2677 | 6.3%
a | 2660 | 6.3%
b | 2659 | 6.3%
8 | 2658 | 6.3%
2 | 2638 | 6.2%
c | 2617 | 6.2%
Other values (6) | 15527 | 36.8%

Most occurring categories

(unknown) | 42240 | 100.0%

Most occurring scripts

(unknown) | 42240 | 100.0%

Most occurring blocks

(unknown) | 42240 | 100.0%

max_forks_repo_licenses
Categorical

IMBALANCE 

Distinct: 28
Distinct (%): 2.7%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB
MIT | 558
Apache-2.0 | 279
BSD-3-Clause | 111
Unlicense | 22
BSD-2-Clause | 18
Other values (23) | 68

Length

Max length: 36
Median length: 3
Mean length: 6.4895833
Min length: 3

Characters and Unicode

Total characters: 6853
Distinct characters: 46
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1
The Unicode Standard assigns character properties to each code point, which can be used to analyse textual variables.

Unique

Unique: 9
Unique (%): 0.9%

Sample

1st row: MIT
2nd row: MIT
3rd row: Apache-2.0
4th row: MIT
5th row: MIT

Common Values

Value | Count | Frequency (%)
MIT | 558 | 52.8%
Apache-2.0 | 279 | 26.4%
BSD-3-Clause | 111 | 10.5%
Unlicense | 22 | 2.1%
BSD-2-Clause | 18 | 1.7%
ECL-2.0 | 13 | 1.2%
CC0-1.0 | 8 | 0.8%
PSF-2.0 | 6 | 0.6%
CC-BY-4.0 | 5 | 0.5%
MIT-0 | 4 | 0.4%
Other values (18) | 32 | 3.0%
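The IMBALANCE flag for this column follows from its frequency table: the top license alone covers more than half of the rows. A hedged reconstruction with pandas, using the reported counts (the exact imbalance-score formula is internal to ydata-profiling and not reproduced here):

```python
import pandas as pd

# Rebuild the column from the reported counts (1056 rows in total).
licenses = (["MIT"] * 558 + ["Apache-2.0"] * 279 + ["BSD-3-Clause"] * 111
            + ["Unlicense"] * 22 + ["BSD-2-Clause"] * 18 + ["other"] * 68)
s = pd.Series(licenses)

freq = s.value_counts(normalize=True)
top_share = freq.iloc[0]   # share of the most common value
```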

Length

Histogram of lengths of the category
Value | Count | Frequency (%)
mit | 558 | 52.8%
apache-2.0 | 279 | 26.4%
bsd-3-clause | 111 | 10.5%
unlicense | 22 | 2.1%
bsd-2-clause | 18 | 1.7%
ecl-2.0 | 13 | 1.2%
cc0-1.0 | 8 | 0.8%
psf-2.0 | 6 | 0.6%
cc-by-4.0 | 5 | 0.5%
mit-0 | 4 | 0.4%
Other values (18) | 32 | 3.0%

Most occurring characters

Value | Count | Frequency (%)
- | 628 | 9.2%
T | 565 | 8.2%
M | 564 | 8.2%
I | 562 | 8.2%
e | 479 | 7.0%
a | 423 | 6.2%
0 | 338 | 4.9%
. | 329 | 4.8%
2 | 326 | 4.8%
c | 312 | 4.6%
Other values (36) | 2327 | 34.0%

Most occurring categories

(unknown) | 6853 | 100.0%

Most occurring scripts

(unknown) | 6853 | 100.0%

Most occurring blocks

(unknown) | 6853 | 100.0%

max_forks_count
Real number (ℝ)

MISSING 

Distinct: 94
Distinct (%): 21.3%
Missing: 614
Missing (%): 58.1%
Infinite: 0
Infinite (%): 0.0%
Mean: 127.49548
Minimum: 1
Maximum: 11956
Zeros: 0
Zeros (%): 0.0%
Negative: 0
Negative (%): 0.0%
Memory size: 16.5 KiB

Quantile statistics

Minimum: 1
5th percentile: 1
Q1: 1
Median: 3
Q3: 15
95th percentile: 338
Maximum: 11956
Range: 11955
Interquartile range (IQR): 14

Descriptive statistics

Standard deviation: 734.3378
Coefficient of variation (CV): 5.7597166
Kurtosis: 160.36243
Mean: 127.49548
Median Absolute Deviation (MAD): 2
Skewness: 11.271813
Sum: 56353
Variance: 539252.01
Monotonicity: Not monotonic
Histogram with fixed size bins (bins=50)
Value | Count | Frequency (%)
1 | 135 | 12.8%
2 | 57 | 5.4%
3 | 32 | 3.0%
4 | 23 | 2.2%
6 | 21 | 2.0%
8 | 11 | 1.0%
7 | 10 | 0.9%
9 | 9 | 0.9%
15 | 9 | 0.9%
5 | 8 | 0.8%
Other values (84) | 127 | 12.0%
(Missing) | 614 | 58.1%
Minimum 10 values

Value | Count | Frequency (%)
1 | 135 | 12.8%
2 | 57 | 5.4%
3 | 32 | 3.0%
4 | 23 | 2.2%
5 | 8 | 0.8%
6 | 21 | 2.0%
7 | 10 | 0.9%
8 | 11 | 1.0%
9 | 9 | 0.9%
10 | 5 | 0.5%
Maximum 10 values

Value | Count | Frequency (%)
11956 | 1 | 0.1%
4645 | 1 | 0.1%
4114 | 1 | 0.1%
3240 | 2 | 0.2%
3029 | 1 | 0.1%
3014 | 1 | 0.1%
2928 | 1 | 0.1%
2202 | 1 | 0.1%
2033 | 1 | 0.1%
1297 | 1 | 0.1%
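The "fixed size bins (bins=50)" captions on the histograms above mean 50 equal-width bins spanning the min-max range. A minimal numpy sketch with illustrative values, not the real column:

```python
import numpy as np

# A long-tailed sample similar in shape to the fork counts.
values = np.array([1, 2, 3, 15, 338, 11956], dtype=float)
counts, edges = np.histogram(values, bins=50)  # 50 equal-width bins over [min, max]
bin_width = edges[1] - edges[0]
```

With heavy right skew, almost all observations land in the first bin, which is why these histograms look like a single spike.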

max_forks_repo_forks_event_min_datetime
DateTime

MISSING

Distinct: 407
Distinct (%): 92.1%
Missing: 614
Missing (%): 58.1%
Memory size: 16.5 KiB
Minimum: 2015-01-01 10:44:13+00:00
Maximum: 2022-03-31 02:22:58+00:00
Histogram with fixed size bins (bins=50)

max_forks_repo_forks_event_max_datetime
DateTime

MISSING

Distinct: 407
Distinct (%): 92.1%
Missing: 614
Missing (%): 58.1%
Memory size: 16.5 KiB
Minimum: 2015-10-29 18:48:59+00:00
Maximum: 2022-03-31 23:43:06+00:00
Histogram with fixed size bins (bins=50)

content
Text

Distinct: 1000
Distinct (%): 94.7%
Missing: 0
Missing (%): 0.0%
Memory size: 16.5 KiB

Length

Max length: 252781
Median length: 3487.5
Mean length: 8144.553
Min length: 10

Characters and Unicode

Total characters: 8600648
Distinct characters: 1655
Distinct categories: 1
Distinct scripts: 1
Distinct blocks: 1
The Unicode Standard assigns character properties to each code point, which can be used to analyse textual variables.

Unique

Unique: 973
Unique (%): 92.1%

Sample

1st rowfrom rest_framework_gis import serializers from rest_framework import serializers as s from .models import ( Artificialisee2015to2018, Artificielle2018, CommunesSybarval, CouvertureSol, EnveloppeUrbaine2018, Ocsge, Renaturee2018to2015, Sybarval, Voirie2018, ZonesBaties2018, UsageSol, ) def get_label(code="", label=""): if code is None: code = "-" if label is None: label = "inconnu" return f"{code} {label[:30]}" class Artificialisee2015to2018Serializer(serializers.GeoFeatureModelSerializer): usage_2015 = s.SerializerMethodField() usage_2018 = s.SerializerMethodField() couverture_2015 = s.SerializerMethodField() couverture_2018 = s.SerializerMethodField() def get_usage_2015(self, obj): return get_label(code=obj.us_2015, label=obj.us_2015_label) def get_usage_2018(self, obj): return get_label(code=obj.us_2018, label=obj.us_2018_label) def get_couverture_2015(self, obj): return get_label(code=obj.cs_2015, label=obj.cs_2015_label) def get_couverture_2018(self, obj): return get_label(code=obj.cs_2018, label=obj.cs_2018_label) class Meta: fields = ( "id", "surface", "usage_2015", "usage_2018", "couverture_2015", "couverture_2018", ) geo_field = "mpoly" model = Artificialisee2015to2018 class Artificielle2018Serializer(serializers.GeoFeatureModelSerializer): couverture = s.SerializerMethodField() def get_couverture(self, obj): return get_label(code=obj.couverture, label=obj.couverture_label) class Meta: fields = ( "id", "surface", "couverture", ) geo_field = "mpoly" model = Artificielle2018 class CommunesSybarvalSerializer(serializers.GeoFeatureModelSerializer): """Marker GeoJSON serializer.""" class Meta: """Marker serializer meta class.""" fields = ( "nom", "code_insee", "surface", ) geo_field = "mpoly" model = CommunesSybarval class EnveloppeUrbaine2018Serializer(serializers.GeoFeatureModelSerializer): couverture = s.SerializerMethodField() def get_couverture(self, obj): return get_label(code=obj.couverture, label=obj.couverture_label) class Meta: fields = ( "id", 
"couverture", "surface", ) geo_field = "mpoly" model = EnveloppeUrbaine2018 class OcsgeSerializer(serializers.GeoFeatureModelSerializer): couverture = s.SerializerMethodField() usage = s.SerializerMethodField() def get_couverture(self, obj): return get_label(code=obj.couverture, label=obj.couverture_label) def get_usage(self, obj): return get_label(code=obj.usage, label=obj.usage_label) class Meta: fields = ( "id", "couverture", "usage", "millesime", "map_color", "year", ) geo_field = "mpoly" model = Ocsge class Renaturee2018to2015Serializer(serializers.GeoFeatureModelSerializer): usage_2015 = s.SerializerMethodField() usage_2018 = s.SerializerMethodField() couverture_2015 = s.SerializerMethodField() couverture_2018 = s.SerializerMethodField() def get_usage_2015(self, obj): return get_label(code=obj.us_2015, label=obj.us_2015_label) def get_usage_2018(self, obj): return get_label(code=obj.us_2018, label=obj.us_2018_label) def get_couverture_2015(self, obj): return get_label(code=obj.cs_2015, label=obj.cs_2015_label) def get_couverture_2018(self, obj): return get_label(code=obj.cs_2018, label=obj.cs_2018_label) class Meta: fields = ( "id", "surface", "usage_2015", "usage_2018", "couverture_2015", "couverture_2018", ) geo_field = "mpoly" model = Renaturee2018to2015 class SybarvalSerializer(serializers.GeoFeatureModelSerializer): class Meta: fields = ( "id", "surface", ) geo_field = "mpoly" model = Sybarval class Voirie2018Serializer(serializers.GeoFeatureModelSerializer): couverture = s.SerializerMethodField() usage = s.SerializerMethodField() def get_couverture(self, obj): return get_label(code=obj.couverture, label=obj.couverture_label) def get_usage(self, obj): return get_label(code=obj.usage, label=obj.usage_label) class Meta: fields = ( "id", "surface", "couverture", "usage", ) geo_field = "mpoly" model = Voirie2018 class ZonesBaties2018Serializer(serializers.GeoFeatureModelSerializer): couverture = s.SerializerMethodField() usage = s.SerializerMethodField() def 
get_couverture(self, obj): return get_label(code=obj.couverture, label=obj.couverture_label) def get_usage(self, obj): return get_label(code=obj.usage, label=obj.usage_label) class Meta: fields = ( "id", "couverture", "usage", "surface", ) geo_field = "mpoly" model = ZonesBaties2018 class CouvertureSolSerializer(serializers.ModelSerializer): class Meta: fields = ( "id", "parent", "code", "label", "is_artificial", ) model = CouvertureSol class UsageSolSerializer(serializers.ModelSerializer): class Meta: fields = ( "id", "parent", "code", "label", ) model = UsageSol
2nd rowfrom django.contrib import admin from .models import SearchResult # Register your models here. class SearchResultAdmin(admin.ModelAdmin): fields = ["query", "heading", "url", "text"] admin.site.register(SearchResult, SearchResultAdmin)
3rd rowimport asyncio import os import tempfile from contextlib import ExitStack from typing import Text, Optional, List, Union, Dict from rasa.importers.importer import TrainingDataImporter from rasa import model from rasa.model import FingerprintComparisonResult from rasa.core.domain import Domain from rasa.utils.common import TempDirectoryPath from rasa.cli.utils import ( print_success, print_warning, print_error, bcolors, print_color, ) from rasa.constants import DEFAULT_MODELS_PATH, DEFAULT_CORE_SUBDIRECTORY_NAME def train( domain: Text, config: Text, training_files: Union[Text, List[Text]], output: Text = DEFAULT_MODELS_PATH, force_training: bool = False, fixed_model_name: Optional[Text] = None, persist_nlu_training_data: bool = False, additional_arguments: Optional[Dict] = None, loop: Optional[asyncio.AbstractEventLoop] = None, ) -> Optional[Text]: if loop is None: try: loop = asyncio.get_event_loop() except RuntimeError: loop = asyncio.new_event_loop() asyncio.set_event_loop(loop) return loop.run_until_complete( train_async( domain=domain, config=config, training_files=training_files, output_path=output, force_training=force_training, fixed_model_name=fixed_model_name, persist_nlu_training_data=persist_nlu_training_data, additional_arguments=additional_arguments, ) ) async def train_async( domain: Union[Domain, Text], config: Dict[Text, Text], training_files: Optional[Union[Text, List[Text]]], output_path: Text = DEFAULT_MODELS_PATH, force_training: bool = False, fixed_model_name: Optional[Text] = None, persist_nlu_training_data: bool = False, additional_arguments: Optional[Dict] = None, ) -> Optional[Text]: """Trains a Rasa model (Core and NLU). Args: domain: Path to the domain file. config: Dict of paths to the config for Core and NLU. Keys are language codes training_files: Paths to the training data for Core and NLU. output_path: Output path. force_training: If `True` retrain model even if data has not changed. 
fixed_model_name: Name of model to be stored. persist_nlu_training_data: `True` if the NLU training data should be persisted with the model. additional_arguments: Additional training parameters. Returns: Path of the trained model archive. """ # file_importer = TrainingDataImporter.load_from_config( # config, domain, training_files # ) with ExitStack() as stack: train_path = stack.enter_context(TempDirectoryPath(tempfile.mkdtemp())) # bf mod from rasa_addons.importers import BotfrontFileImporter file_importer = BotfrontFileImporter(config, domain, training_files) # domain = await file_importer.get_domain() # if domain.is_empty(): # return await handle_domain_if_not_exists( # file_importer, output_path, fixed_model_name # ) # /bf mod return await _train_async_internal( file_importer, train_path, output_path, force_training, fixed_model_name, persist_nlu_training_data, additional_arguments, ) async def handle_domain_if_not_exists( file_importer: TrainingDataImporter, output_path, fixed_model_name ): nlu_model_only = await _train_nlu_with_validated_data( file_importer, output=output_path, fixed_model_name=fixed_model_name ) print_warning( "Core training was skipped because no valid domain file was found. Only an nlu-model was created." "Please specify a valid domain using '--domain' argument or check if the provided domain file exists." ) return nlu_model_only async def _train_async_internal( file_importer: TrainingDataImporter, train_path: Text, output_path: Text, force_training: bool, fixed_model_name: Optional[Text], persist_nlu_training_data: bool, additional_arguments: Optional[Dict], ) -> Optional[Text]: """Trains a Rasa model (Core and NLU). Use only from `train_async`. Args: file_importer: `TrainingDataImporter` which supplies the training data. train_path: Directory in which to train the model. output_path: Output path. force_training: If `True` retrain model even if data has not changed. 
persist_nlu_training_data: `True` if the NLU training data should be persisted with the model. fixed_model_name: Name of model to be stored. additional_arguments: Additional training parameters. Returns: Path of the trained model archive. """ stories, nlu_data = await asyncio.gather( file_importer.get_stories(), file_importer.get_nlu_data() ) # if stories.is_empty() and nlu_data.is_empty(): # print_error( # "No training data given. Please provide stories and NLU data in " # "order to train a Rasa model using the '--data' argument." # ) # return # if nlu_data.is_empty(): # print_warning("No NLU data present. Just a Rasa Core model will be trained.") # return await _train_core_with_validated_data( # file_importer, # output=output_path, # fixed_model_name=fixed_model_name, # additional_arguments=additional_arguments, # ) new_fingerprint = await model.model_fingerprint(file_importer) old_model = model.get_latest_model(output_path) fingerprint_comparison = FingerprintComparisonResult(force_training=force_training) if not force_training: fingerprint_comparison = model.should_retrain( new_fingerprint, old_model, train_path ) # bf mod > if fingerprint_comparison.nlu == True: # replace True with list of all langs fingerprint_comparison.nlu = list(new_fingerprint.get("nlu-config", {}).keys()) domain = await file_importer.get_domain() core_untrainable = domain.is_empty() or stories.is_empty() nlu_untrainable = [l for l, d in nlu_data.items() if d.is_empty()] fingerprint_comparison.core = fingerprint_comparison.core and not core_untrainable fingerprint_comparison.nlu = [l for l in fingerprint_comparison.nlu if l not in nlu_untrainable] if core_untrainable: print_color("Skipping Core training since domain or stories are empty.", color=bcolors.OKBLUE) for lang in nlu_untrainable: print_color("No NLU data found for language <{}>, skipping training...".format(lang), color=bcolors.OKBLUE) # </ bf mod if fingerprint_comparison.is_training_required(): await _do_training( 
file_importer, output_path=output_path, train_path=train_path, fingerprint_comparison_result=fingerprint_comparison, fixed_model_name=fixed_model_name, persist_nlu_training_data=persist_nlu_training_data, additional_arguments=additional_arguments, ) return model.package_model( fingerprint=new_fingerprint, output_directory=output_path, train_path=train_path, fixed_model_name=fixed_model_name, ) print_success( "Nothing changed. You can use the old model stored at '{}'." "".format(os.path.abspath(old_model)) ) return old_model async def _do_training( file_importer: TrainingDataImporter, output_path: Text, train_path: Text, fingerprint_comparison_result: Optional[FingerprintComparisonResult] = None, fixed_model_name: Optional[Text] = None, persist_nlu_training_data: bool = False, additional_arguments: Optional[Dict] = None, ): if not fingerprint_comparison_result: fingerprint_comparison_result = FingerprintComparisonResult() if fingerprint_comparison_result.should_retrain_core(): await _train_core_with_validated_data( file_importer, output=output_path, train_path=train_path, fixed_model_name=fixed_model_name, additional_arguments=additional_arguments, ) elif fingerprint_comparison_result.should_retrain_nlg(): print_color( "Core stories/configuration did not change. " "Only the templates section has been changed. A new model with " "the updated templates will be created.", color=bcolors.OKBLUE, ) await model.update_model_with_new_domain(file_importer, train_path) else: print_color( "Core stories/configuration did not change. No need to retrain Core model.", color=bcolors.OKBLUE, ) if fingerprint_comparison_result.should_retrain_nlu(): await _train_nlu_with_validated_data( file_importer, output=output_path, train_path=train_path, fixed_model_name=fixed_model_name, retrain_nlu=fingerprint_comparison_result.nlu, persist_nlu_training_data=persist_nlu_training_data, ) else: print_color( "NLU data/configuration did not change. 
No need to retrain NLU model.", color=bcolors.OKBLUE, ) def train_core( domain: Union[Domain, Text], config: Text, stories: Text, output: Text, train_path: Optional[Text] = None, fixed_model_name: Optional[Text] = None, additional_arguments: Optional[Dict] = None, ) -> Optional[Text]: loop = asyncio.get_event_loop() return loop.run_until_complete( train_core_async( domain=domain, config=config, stories=stories, output=output, train_path=train_path, fixed_model_name=fixed_model_name, additional_arguments=additional_arguments, ) ) async def train_core_async( domain: Union[Domain, Text], config: Text, stories: Text, output: Text, train_path: Optional[Text] = None, fixed_model_name: Optional[Text] = None, additional_arguments: Optional[Dict] = None, ) -> Optional[Text]: """Trains a Core model. Args: domain: Path to the domain file. config: Path to the config file for Core. stories: Path to the Core training data. output: Output path. train_path: If `None` the model will be trained in a temporary directory, otherwise in the provided directory. fixed_model_name: Name of model to be stored. uncompress: If `True` the model will not be compressed. additional_arguments: Additional training parameters. Returns: If `train_path` is given it returns the path to the model archive, otherwise the path to the directory with the trained model files. """ file_importer = TrainingDataImporter.load_core_importer_from_config( config, domain, [stories] ) domain = await file_importer.get_domain() if domain.is_empty(): print_error( "Core training was skipped because no valid domain file was found. " "Please specify a valid domain using '--domain' argument or check if the provided domain file exists." ) return None if not await file_importer.get_stories(): print_error( "No stories given. Please provide stories in order to " "train a Rasa Core model using the '--stories' argument." 
) return return await _train_core_with_validated_data( file_importer, output=output, train_path=train_path, fixed_model_name=fixed_model_name, additional_arguments=additional_arguments, ) async def _train_core_with_validated_data( file_importer: TrainingDataImporter, output: Text, train_path: Optional[Text] = None, fixed_model_name: Optional[Text] = None, additional_arguments: Optional[Dict] = None, ) -> Optional[Text]: """Train Core with validated training and config data.""" import rasa.core.train with ExitStack() as stack: if train_path: # If the train path was provided, do nothing on exit. _train_path = train_path else: # Otherwise, create a temp train path and clean it up on exit. _train_path = stack.enter_context(TempDirectoryPath(tempfile.mkdtemp())) # normal (not compare) training print_color("Training Core model...", color=bcolors.OKBLUE) domain, config = await asyncio.gather( file_importer.get_domain(), file_importer.get_config() ) await rasa.core.train( domain_file=domain, training_resource=file_importer, output_path=os.path.join(_train_path, DEFAULT_CORE_SUBDIRECTORY_NAME), policy_config=config, additional_arguments=additional_arguments, ) print_color("Core model training completed.", color=bcolors.OKBLUE) if train_path is None: # Only Core was trained. new_fingerprint = await model.model_fingerprint(file_importer) return model.package_model( fingerprint=new_fingerprint, output_directory=output, train_path=_train_path, fixed_model_name=fixed_model_name, model_prefix="core-", ) return _train_path def train_nlu( config: Text, nlu_data: Text, output: Text, train_path: Optional[Text] = None, fixed_model_name: Optional[Text] = None, persist_nlu_training_data: bool = False, ) -> Optional[Text]: """Trains an NLU model. Args: config: Path to the config file for NLU. nlu_data: Path to the NLU training data. output: Output path. train_path: If `None` the model will be trained in a temporary directory, otherwise in the provided directory. 
fixed_model_name: Name of the model to be stored. persist_nlu_training_data: `True` if the NLU training data should be persisted with the model. Returns: If `train_path` is given it returns the path to the model archive, otherwise the path to the directory with the trained model files. """ loop = asyncio.get_event_loop() return loop.run_until_complete( _train_nlu_async( config, nlu_data, output, train_path, fixed_model_name, persist_nlu_training_data, ) ) async def _train_nlu_async( config: Text, nlu_data: Text, output: Text, train_path: Optional[Text] = None, fixed_model_name: Optional[Text] = None, persist_nlu_training_data: bool = False, ): if not nlu_data: print_error( "No NLU data given. Please provide NLU data in order to train " "a Rasa NLU model using the '--nlu' argument." ) return # training NLU only hence the training files still have to be selected file_importer = TrainingDataImporter.load_nlu_importer_from_config( config, training_data_paths=[nlu_data] ) training_datas = await file_importer.get_nlu_data() if training_datas.is_empty(): print_error( f"Path '{nlu_data}' doesn't contain valid NLU data in it. " "Please verify the data format. " "The NLU model training will be skipped now." ) return return await _train_nlu_with_validated_data( file_importer, output=output, train_path=train_path, fixed_model_name=fixed_model_name, persist_nlu_training_data=persist_nlu_training_data, ) async def _train_nlu_with_validated_data( file_importer: TrainingDataImporter, output: Text, train_path: Optional[Text] = None, fixed_model_name: Optional[Text] = None, persist_nlu_training_data: bool = False, retrain_nlu: Union[bool, List[Text]] = True ) -> Optional[Text]: """Train NLU with validated training and config data.""" import rasa.nlu.train with ExitStack() as stack: models = {} from rasa.nlu import config as cfg_loader if train_path: # If the train path was provided, do nothing on exit. 
_train_path = train_path else: # Otherwise, create a temp train path and clean it up on exit. _train_path = stack.enter_context(TempDirectoryPath(tempfile.mkdtemp())) # bf mod config = await file_importer.get_nlu_config(retrain_nlu) for lang in config: if config[lang]: print_color("Start training {} NLU model ...".format(lang), color=bcolors.OKBLUE) _, models[lang], _ = await rasa.nlu.train( config[lang], file_importer, _train_path, fixed_model_name="nlu-{}".format(lang), persist_nlu_training_data=persist_nlu_training_data, ) else: print_color("NLU data for language <{}> didn't change, skipping training...".format(lang), color=bcolors.OKBLUE) # /bf mod print_color("NLU model training completed.", color=bcolors.OKBLUE) if train_path is None: # Only NLU was trained new_fingerprint = await model.model_fingerprint(file_importer) return model.package_model( fingerprint=new_fingerprint, output_directory=output, train_path=_train_path, fixed_model_name=fixed_model_name, model_prefix="nlu-", ) return _train_path
4th rowclass Solution: def finalPrices(self, prices: List[int]) -> List[int]: res = [] for i in range(len(prices)): for j in range(i+1,len(prices)): if prices[j]<=prices[i]: res.append(prices[i]-prices[j]) break if j==len(prices)-1: res.append(prices[i]) res.append(prices[-1]) return res
5th row# ============================================================================ # FILE: default.py # AUTHOR: Shougo Matsushita <Shougo.Matsu at gmail.com> # License: MIT license # ============================================================================ import re import typing from denite.util import echo, error, clearmatch, regex_convert_py_vim from denite.util import Nvim, UserContext, Candidates, Candidate from denite.parent import SyncParent class Default(object): @property def is_async(self) -> bool: return self._is_async def __init__(self, vim: Nvim) -> None: self._vim = vim self._denite: typing.Optional[SyncParent] = None self._selected_candidates: typing.List[int] = [] self._candidates: Candidates = [] self._cursor = 0 self._entire_len = 0 self._result: typing.List[typing.Any] = [] self._context: UserContext = {} self._bufnr = -1 self._winid = -1 self._winrestcmd = '' self._initialized = False self._winheight = 0 self._winwidth = 0 self._winminheight = -1 self._is_multi = False self._is_async = False self._matched_pattern = '' self._displayed_texts: typing.List[str] = [] self._statusline_sources = '' self._titlestring = '' self._ruler = False self._prev_action = '' self._prev_status: typing.Dict[str, typing.Any] = {} self._prev_curpos: typing.List[typing.Any] = [] self._save_window_options: typing.Dict[str, typing.Any] = {} self._sources_history: typing.List[typing.Any] = [] self._previous_text = '' self._floating = False self._filter_floating = False self._updated = False self._timers: typing.Dict[str, int] = {} self._matched_range_id = -1 self._matched_char_id = -1 self._check_matchdelete = bool(self._vim.call( 'denite#util#check_matchdelete')) def start(self, sources: typing.List[typing.Any], context: UserContext) -> typing.List[typing.Any]: if not self._denite: # if hasattr(self._vim, 'run_coroutine'): # self._denite = ASyncParent(self._vim) # else: self._denite = SyncParent(self._vim) self._result = [] context['sources_queue'] = [sources] 
self._start_sources_queue(context) return self._result def do_action(self, action_name: str, command: str = '', is_manual: bool = False) -> None: if is_manual: candidates = self._get_selected_candidates() elif self._get_cursor_candidate(): candidates = [self._get_cursor_candidate()] else: candidates = [] if not self._denite or not candidates or not action_name: return self._prev_action = action_name action = self._denite.get_action( self._context, action_name, candidates) if not action: return post_action = self._context['post_action'] is_quit = action['is_quit'] or post_action == 'quit' if is_quit: self.quit() self._denite.do_action(self._context, action_name, candidates) self._result = candidates if command != '': self._vim.command(command) if is_quit and post_action == 'open': # Re-open denite buffer prev_cursor = self._cursor cursor_candidate = self._get_cursor_candidate() self._init_buffer() self.redraw(False) if cursor_candidate == self._get_candidate(prev_cursor): # Restore the cursor self._move_to_pos(prev_cursor) # Disable quit flag is_quit = False if not is_quit and is_manual: self._selected_candidates = [] self.redraw(action['is_redraw']) if is_manual and self._context['sources_queue']: self._context['input'] = '' self._context['quick_move'] = '' self._start_sources_queue(self._context) return def redraw(self, is_force: bool = True) -> None: self._context['is_redraw'] = is_force if is_force: self._gather_candidates() if self._update_candidates(): self._update_buffer() else: self._update_status() self._context['is_redraw'] = False def quit(self) -> None: if self._denite: self._denite.on_close(self._context) self._quit_buffer() self._result = [] return def _restart(self) -> None: self._context['input'] = '' self._quit_buffer() self._init_denite() self._gather_candidates() self._init_buffer() self._update_candidates() self._update_buffer() def _start_sources_queue(self, context: UserContext) -> None: if not context['sources_queue']: return 
self._sources_history.append({ 'sources': context['sources_queue'][0], 'path': context['path'], }) self._start(context['sources_queue'][0], context) if context['sources_queue']: context['sources_queue'].pop(0) context['path'] = self._context['path'] def _start(self, sources: typing.List[typing.Any], context: UserContext) -> None: from denite.ui.map import do_map self._vim.command('silent! autocmd! denite') if re.search(r'\[Command Line\]$', self._vim.current.buffer.name): # Ignore command line window. return resume = self._initialized and context['resume'] if resume: # Skip the initialization update = ('immediately', 'immediately_1', 'cursor_pos', 'prev_winid', 'start_filter', 'quick_move') for key in update: self._context[key] = context[key] self._check_move_option() if self._check_do_option(): return self._init_buffer() if context['refresh']: self.redraw() self._move_to_pos(self._cursor) else: if self._context != context: self._context.clear() self._context.update(context) self._context['sources'] = sources self._context['is_redraw'] = False self._is_multi = len(sources) > 1 if not sources: # Ignore empty sources. 
error(self._vim, 'Empty sources') return self._init_denite() self._gather_candidates() self._update_candidates() self._init_cursor() self._check_move_option() if self._check_do_option(): return self._init_buffer() self._update_displayed_texts() self._update_buffer() self._move_to_pos(self._cursor) if self._context['quick_move'] and do_map(self, 'quick_move', []): return if self._context['start_filter']: do_map(self, 'open_filter_buffer', []) def _init_buffer(self) -> None: self._prev_status = dict() self._displayed_texts = [] self._prev_bufnr = self._vim.current.buffer.number self._prev_curpos = self._vim.call('getcurpos') self._prev_wininfo = self._get_wininfo() self._prev_winid = self._context['prev_winid'] self._winrestcmd = self._vim.call('winrestcmd') self._ruler = self._vim.options['ruler'] self._switch_buffer() self._bufnr = self._vim.current.buffer.number self._winid = self._vim.call('win_getid') self._resize_buffer(True) self._winheight = self._vim.current.window.height self._winwidth = self._vim.current.window.width self._bufvars = self._vim.current.buffer.vars self._bufvars['denite'] = { 'buffer_name': self._context['buffer_name'], } self._bufvars['denite_statusline'] = {} self._vim.vars['denite#_previewed_buffers'] = {} self._save_window_options = {} window_options = { 'colorcolumn', 'concealcursor', 'conceallevel', 'cursorcolumn', 'cursorline', 'foldcolumn', 'foldenable', 'list', 'number', 'relativenumber', 'signcolumn', 'spell', 'winfixheight', 'wrap', } for k in window_options: self._save_window_options[k] = self._vim.current.window.options[k] # Note: Have to use setlocal instead of "current.window.options" # "current.window.options" changes global value instead of local in # neovim. 
self._vim.command('setlocal colorcolumn=') self._vim.command('setlocal conceallevel=3') self._vim.command('setlocal concealcursor=inv') self._vim.command('setlocal nocursorcolumn') self._vim.command('setlocal nofoldenable') self._vim.command('setlocal foldcolumn=0') self._vim.command('setlocal nolist') self._vim.command('setlocal nonumber') self._vim.command('setlocal norelativenumber') self._vim.command('setlocal nospell') self._vim.command('setlocal winfixheight') self._vim.command('setlocal nowrap') if self._context['prompt']: self._vim.command('setlocal signcolumn=yes') else: self._vim.command('setlocal signcolumn=auto') if self._context['cursorline']: self._vim.command('setlocal cursorline') options = self._vim.current.buffer.options if self._floating: # Disable ruler self._vim.options['ruler'] = False options['buftype'] = 'nofile' options['bufhidden'] = 'delete' options['swapfile'] = False options['buflisted'] = False options['modeline'] = False options['modifiable'] = False options['filetype'] = 'denite' if self._vim.call('exists', '#WinEnter'): self._vim.command('doautocmd WinEnter') if self._vim.call('exists', '#BufWinEnter'): self._vim.command('doautocmd BufWinEnter') if not self._vim.call('has', 'nvim'): # In Vim8, FileType autocmd is not fired after set filetype option. 
self._vim.command('silent doautocmd FileType denite') if self._context['auto_action']: self._vim.command('autocmd denite ' 'CursorMoved <buffer> ' 'call denite#call_map("auto_action")') self._init_syntax() def _switch_buffer(self) -> None: split = self._context['split'] if (split != 'no' and self._winid > 0 and self._vim.call('win_gotoid', self._winid)): if split != 'vertical' and not self._floating: # Move the window to bottom self._vim.command('wincmd J') self._winrestcmd = '' return self._floating = split in [ 'floating', 'floating_relative_cursor', 'floating_relative_window', ] self._filter_floating = False if self._vim.current.buffer.options['filetype'] != 'denite': self._titlestring = self._vim.options['titlestring'] command = 'edit' if split == 'tab': self._vim.command('tabnew') elif self._floating: self._split_floating(split) elif self._context['filter_split_direction'] == 'floating': self._filter_floating = True elif split != 'no': command = self._get_direction() command += ' vsplit' if split == 'vertical' else ' split' bufname = '[denite]-' + self._context['buffer_name'] if self._vim.call('exists', '*bufadd'): bufnr = self._vim.call('bufadd', bufname) vertical = 'vertical' if split == 'vertical' else '' command = ( 'buffer' if split in ['no', 'tab', 'floating', 'floating_relative_window', 'floating_relative_cursor'] else 'sbuffer') self._vim.command( 'silent keepalt %s %s %s %s' % ( self._get_direction(), vertical, command, bufnr, ) ) else: self._vim.call( 'denite#util#execute_path', f'silent keepalt {command}', bufname) def _get_direction(self) -> str: direction = str(self._context['direction']) if direction == 'dynamictop' or direction == 'dynamicbottom': self._update_displayed_texts() winwidth = self._vim.call('winwidth', 0) is_fit = not [x for x in self._displayed_texts if self._vim.call('strwidth', x) > winwidth] if direction == 'dynamictop': direction = 'aboveleft' if is_fit else 'topleft' else: direction = 'belowright' if is_fit else 'botright' 
return direction def _get_wininfo(self) -> typing.List[typing.Any]: return [ self._vim.options['columns'], self._vim.options['lines'], self._vim.call('win_getid'), self._vim.call('tabpagebuflist') ] def _switch_prev_buffer(self) -> None: if (self._prev_bufnr == self._bufnr or self._vim.buffers[self._prev_bufnr].name == ''): self._vim.command('enew') else: self._vim.command('buffer ' + str(self._prev_bufnr)) def _init_syntax(self) -> None: self._vim.command('syntax case ignore') self._vim.command('highlight default link deniteInput ModeMsg') self._vim.command('highlight link deniteMatchedRange ' + self._context['highlight_matched_range']) self._vim.command('highlight link deniteMatchedChar ' + self._context['highlight_matched_char']) self._vim.command('highlight default link ' + 'deniteStatusLinePath Comment') self._vim.command('highlight default link ' + 'deniteStatusLineNumber LineNR') self._vim.command('highlight default link ' + 'deniteSelectedLine Statement') if self._floating: self._vim.current.window.options['winhighlight'] = ( 'Normal:' + self._context['highlight_window_background'] ) self._vim.command(('syntax match deniteSelectedLine /^[%s].*/' + ' contains=deniteConcealedMark') % ( self._context['selected_icon'])) self._vim.command(('syntax match deniteConcealedMark /^[ %s]/' + ' conceal contained') % ( self._context['selected_icon'])) if self._denite: self._denite.init_syntax(self._context, self._is_multi) def _update_candidates(self) -> bool: if not self._denite: return False [self._is_async, pattern, statuses, self._entire_len, self._candidates] = self._denite.filter_candidates(self._context) prev_displayed_texts = self._displayed_texts self._update_displayed_texts() prev_matched_pattern = self._matched_pattern self._matched_pattern = pattern prev_statusline_sources = self._statusline_sources self._statusline_sources = ' '.join(statuses) if self._is_async: self._start_timer('update_candidates') else: self._stop_timer('update_candidates') updated = 
(self._displayed_texts != prev_displayed_texts or self._matched_pattern != prev_matched_pattern or self._statusline_sources != prev_statusline_sources) if updated: self._updated = True self._start_timer('update_buffer') if self._context['search'] and self._context['input']: self._vim.call('setreg', '/', self._context['input']) return self._updated def _update_displayed_texts(self) -> None: candidates_len = len(self._candidates) if not self._is_async and self._context['auto_resize']: winminheight = self._context['winminheight'] max_height = min(self._context['winheight'], self._get_max_height()) if (winminheight != -1 and candidates_len < winminheight): self._winheight = winminheight elif candidates_len > max_height: self._winheight = max_height elif candidates_len != self._winheight: self._winheight = candidates_len max_source_name_len = 0 if self._candidates: max_source_name_len = max([ len(self._get_display_source_name(x['source_name'])) for x in self._candidates]) self._context['max_source_name_len'] = max_source_name_len self._context['max_source_name_format'] = ( '{:<' + str(self._context['max_source_name_len']) + '}') self._displayed_texts = [ self._get_candidate_display_text(i) for i in range(0, candidates_len) ] def _update_buffer(self) -> None: is_current_buffer = self._bufnr == self._vim.current.buffer.number self._update_status() if self._check_matchdelete and self._context['match_highlight']: matches = [x['id'] for x in self._vim.call('getmatches', self._winid)] if self._matched_range_id in matches: self._vim.call('matchdelete', self._matched_range_id, self._winid) self._matched_range_id = -1 if self._matched_char_id in matches: self._vim.call('matchdelete', self._matched_char_id, self._winid) self._matched_char_id = -1 if self._matched_pattern != '': self._matched_range_id = self._vim.call( 'matchadd', 'deniteMatchedRange', r'\c' + regex_convert_py_vim(self._matched_pattern), 10, -1, {'window': self._winid}) matched_char_pattern = '[{}]'.format(re.sub( 
r'([\[\]\\^-])', r'\\\1', self._context['input'].replace(' ', '') )) self._matched_char_id = self._vim.call( 'matchadd', 'deniteMatchedChar', matched_char_pattern, 10, -1, {'window': self._winid}) prev_linenr = self._vim.call('line', '.') prev_candidate = self._get_cursor_candidate() buffer = self._vim.buffers[self._bufnr] buffer.options['modifiable'] = True self._vim.vars['denite#_candidates'] = [ x['word'] for x in self._candidates] buffer[:] = self._displayed_texts buffer.options['modifiable'] = False self._previous_text = self._context['input'] self._resize_buffer(is_current_buffer) is_changed = (self._context['reversed'] or (is_current_buffer and self._previous_text != self._context['input'])) if self._updated and is_changed: if not is_current_buffer: save_winid = self._vim.call('win_getid') self._vim.call('win_gotoid', self._winid) self._init_cursor() self._move_to_pos(self._cursor) if not is_current_buffer: self._vim.call('win_gotoid', save_winid) elif is_current_buffer: self._vim.call('cursor', [prev_linenr, 0]) if is_current_buffer: if (self._context['auto_action'] and prev_candidate != self._get_cursor_candidate()): self.do_action(self._context['auto_action']) self._updated = False self._stop_timer('update_buffer') def _update_status(self) -> None: inpt = '' if self._context['input']: inpt = self._context['input'] + ' ' if self._context['error_messages']: inpt = '[ERROR] ' + inpt path = '[' + self._context['path'] + ']' status = { 'input': inpt, 'sources': self._statusline_sources, 'path': path, # Extra 'buffer_name': self._context['buffer_name'], 'line_total': len(self._candidates), } if status == self._prev_status: return self._bufvars['denite_statusline'] = status self._prev_status = status linenr = "printf('%'.(len(line('$'))+2).'d/%d',line('.'),line('$'))" if self._context['statusline']: if self._floating or self._filter_floating: self._vim.options['titlestring'] = ( "%{denite#get_status('input')}%* " + "%{denite#get_status('sources')} " + " 
%{denite#get_status('path')}%*" + "%{" + linenr + "}%*") else: winnr = self._vim.call('win_id2win', self._winid) self._vim.call('setwinvar', winnr, '&statusline', ( "%#deniteInput#%{denite#get_status('input')}%* " + "%{denite#get_status('sources')} %=" + "%#deniteStatusLinePath# %{denite#get_status('path')}%*" + "%#deniteStatusLineNumber#%{" + linenr + "}%*")) def _get_display_source_name(self, name: str) -> str: source_names = self._context['source_names'] if not self._is_multi or source_names == 'hide': source_name = '' else: short_name = (re.sub(r'([a-zA-Z])[a-zA-Z]+', r'\1', name) if re.search(r'[^a-zA-Z]', name) else name[:2]) source_name = short_name if source_names == 'short' else name return source_name def _get_candidate_display_text(self, index: int) -> str: source_names = self._context['source_names'] candidate = self._candidates[index] terms = [] if self._is_multi and source_names != 'hide': terms.append(self._context['max_source_name_format'].format( self._get_display_source_name(candidate['source_name']))) encoding = self._context['encoding'] abbr = candidate.get('abbr', candidate['word']).encode( encoding, errors='replace').decode(encoding, errors='replace') terms.append(abbr[:int(self._context['max_candidate_width'])]) return (str(self._context['selected_icon']) if index in self._selected_candidates else ' ') + ' '.join(terms).replace('\n', '') def _get_max_height(self) -> int: return int(self._vim.options['lines']) if not self._floating else ( int(self._vim.options['lines']) - int(self._context['winrow']) - int(self._vim.options['cmdheight'])) def _resize_buffer(self, is_current_buffer: bool) -> None: split = self._context['split'] if (split == 'no' or split == 'tab' or self._vim.call('winnr', '$') == 1): return winheight = max(self._winheight, 1) winwidth = max(self._winwidth, 1) is_vertical = split == 'vertical' if not is_current_buffer: restore = self._vim.call('win_getid') self._vim.call('win_gotoid', self._winid) if not is_vertical and 
self._vim.current.window.height != winheight: if self._floating: wincol = self._context['winrow'] row = wincol if split == 'floating': if self._context['auto_resize'] and row > 1: row += self._context['winheight'] row -= self._winheight self._vim.call('nvim_win_set_config', self._winid, { 'relative': 'editor', 'row': row, 'col': self._context['wincol'], 'width': winwidth, 'height': winheight, }) filter_row = 0 if wincol == 1 else row + winheight filter_col = self._context['wincol'] else: init_pos = self._vim.call('nvim_win_get_config', self._winid) self._vim.call('nvim_win_set_config', self._winid, { 'relative': 'win', 'win': init_pos['win'], 'row': init_pos['row'], 'col': init_pos['col'], 'width': winwidth, 'height': winheight, }) filter_col = init_pos['col'] if init_pos['anchor'] == 'NW': winpos = self._vim.call('nvim_win_get_position', self._winid) filter_row = winpos[0] + winheight filter_winid = self._vim.vars['denite#_filter_winid'] self._context['filter_winrow'] = row if self._vim.call('win_id2win', filter_winid) > 0: self._vim.call('nvim_win_set_config', filter_winid, { 'relative': 'editor', 'row': filter_row, 'col': filter_col, }) self._vim.command('resize ' + str(winheight)) if self._context['reversed']: self._vim.command('normal! 
zb') elif is_vertical and self._vim.current.window.width != winwidth: self._vim.command('vertical resize ' + str(winwidth)) if not is_current_buffer: self._vim.call('win_gotoid', restore) def _check_do_option(self) -> bool: if self._context['do'] != '': self._do_command(self._context['do']) return True elif (self._candidates and self._context['immediately'] or len(self._candidates) == 1 and self._context['immediately_1']): self._do_immediately() return True return not (self._context['empty'] or self._is_async or self._candidates) def _check_move_option(self) -> None: if self._context['cursor_pos'].isnumeric(): self._cursor = int(self._context['cursor_pos']) + 1 elif re.match(r'\+\d+', self._context['cursor_pos']): for _ in range(int(self._context['cursor_pos'][1:])): self._move_to_next_line() elif re.match(r'-\d+', self._context['cursor_pos']): for _ in range(int(self._context['cursor_pos'][1:])): self._move_to_prev_line() elif self._context['cursor_pos'] == '$': self._move_to_last_line() def _do_immediately(self) -> None: goto = self._winid > 0 and self._vim.call( 'win_gotoid', self._winid) if goto: # Jump to denite window self._init_buffer() self.do_action('default') candidate = self._get_cursor_candidate() if not candidate: return echo(self._vim, 'Normal', '[{}/{}] {}'.format( self._cursor, len(self._candidates), candidate.get('abbr', candidate['word']))) if goto: # Move to the previous window self._vim.command('wincmd p') def _do_command(self, command: str) -> None: self._init_cursor() cursor = 1 while cursor < len(self._candidates): self.do_action('default', command) self._move_to_next_line() self._quit_buffer() def _cleanup(self) -> None: self._stop_timer('update_candidates') self._stop_timer('update_buffer') if self._vim.current.buffer.number == self._bufnr: self._cursor = self._vim.call('line', '.') # Note: Close filter window before preview window self._vim.call('denite#filter#_close_filter_window') if not self._context['has_preview_window']: 
self._vim.command('pclose!') # Clear previewed buffers for bufnr in self._vim.vars['denite#_previewed_buffers'].keys(): if not self._vim.call('win_findbuf', bufnr): self._vim.command('silent bdelete ' + str(bufnr)) self._vim.vars['denite#_previewed_buffers'] = {} self._vim.command('highlight! link CursorLine CursorLine') if self._floating or self._filter_floating: self._vim.options['titlestring'] = self._titlestring self._vim.options['ruler'] = self._ruler def _close_current_window(self) -> None: if self._vim.call('winnr', '$') == 1: self._vim.command('buffer #') else: self._vim.command('close!') def _quit_buffer(self) -> None: self._cleanup() if self._vim.call('bufwinnr', self._bufnr) < 0: # Denite buffer is already closed return winids = self._vim.call('win_findbuf', self._vim.vars['denite#_filter_bufnr']) if winids: # Quit filter buffer self._vim.call('win_gotoid', winids[0]) self._close_current_window() # Move to denite window self._vim.call('win_gotoid', self._winid) # Restore the window if self._context['split'] == 'no': self._switch_prev_buffer() for k, v in self._save_window_options.items(): self._vim.current.window.options[k] = v else: if self._context['split'] == 'tab': self._vim.command('tabclose!') if self._context['split'] != 'tab': self._close_current_window() self._vim.call('win_gotoid', self._prev_winid) # Restore the position self._vim.call('setpos', '.', self._prev_curpos) if self._get_wininfo() and self._get_wininfo() == self._prev_wininfo: # Note: execute restcmd twice to restore layout properly self._vim.command(self._winrestcmd) self._vim.command(self._winrestcmd) clearmatch(self._vim) def _get_cursor_candidate(self) -> Candidate: return self._get_candidate(self._cursor) def _get_candidate(self, pos: int) -> Candidate: if not self._candidates or pos > len(self._candidates): return {} return self._candidates[pos - 1] def _get_selected_candidates(self) -> Candidates: if not self._selected_candidates: return [self._get_cursor_candidate() ] if 
self._get_cursor_candidate() else [] return [self._candidates[x] for x in self._selected_candidates] def _init_denite(self) -> None: if self._denite: self._denite.start(self._context) self._denite.on_init(self._context) self._initialized = True self._winheight = self._context['winheight'] self._winwidth = self._context['winwidth'] def _gather_candidates(self) -> None: self._selected_candidates = [] if self._denite: self._denite.gather_candidates(self._context) def _init_cursor(self) -> None: if self._context['reversed']: self._move_to_last_line() else: self._move_to_first_line() def _move_to_pos(self, pos: int) -> None: self._vim.call('cursor', pos, 0) self._cursor = pos if self._context['reversed']: self._vim.command('normal! zb') def _move_to_next_line(self) -> None: if self._cursor < len(self._candidates): self._cursor += 1 def _move_to_prev_line(self) -> None: if self._cursor >= 1: self._cursor -= 1 def _move_to_first_line(self) -> None: self._cursor = 1 def _move_to_last_line(self) -> None: self._cursor = len(self._candidates) def _start_timer(self, key: str) -> None: if key in self._timers: return if key == 'update_candidates': self._timers[key] = self._vim.call( 'denite#helper#_start_update_candidates_timer', self._bufnr) elif key == 'update_buffer': self._timers[key] = self._vim.call( 'denite#helper#_start_update_buffer_timer', self._bufnr) def _stop_timer(self, key: str) -> None: if key not in self._timers: return self._vim.call('timer_stop', self._timers[key]) # Note: After timer_stop is called, self._timers may be removed if key in self._timers: self._timers.pop(key) def _split_floating(self, split: str) -> None: # Use floating window if split == 'floating': self._vim.call( 'nvim_open_win', self._vim.call('bufnr', '%'), True, { 'relative': 'editor', 'row': self._context['winrow'], 'col': self._context['wincol'], 'width': self._context['winwidth'], 'height': self._context['winheight'], }) elif split == 'floating_relative_cursor': opened_pos = 
(self._vim.call('nvim_win_get_position', 0)[0] + self._vim.call('winline') - 1) if self._context['auto_resize']: height = max(self._winheight, 1) width = max(self._winwidth, 1) else: width = self._context['winwidth'] height = self._context['winheight'] if opened_pos + height + 3 > self._vim.options['lines']: anchor = 'SW' row = 0 self._context['filter_winrow'] = row + opened_pos else: anchor = 'NW' row = 1 self._context['filter_winrow'] = row + height + opened_pos self._vim.call( 'nvim_open_win', self._vim.call('bufnr', '%'), True, { 'relative': 'cursor', 'row': row, 'col': 0, 'width': width, 'height': height, 'anchor': anchor, }) elif split == 'floating_relative_window': self._vim.call( 'nvim_open_win', self._vim.call('bufnr', '%'), True, { 'relative': 'win', 'row': self._context['winrow'], 'col': self._context['wincol'], 'width': self._context['winwidth'], 'height': self._context['winheight'], })
Most occurring words

Value                    Count     Frequency (%)
                         107333    14.2%
the                      15893     2.1%
if                       12107     1.6%
def                      9970      1.3%
1                        9967      1.3%
in                       9270      1.2%
return                   7861      1.0%
for                      7747      1.0%
import                   6592      0.9%
to                       6385      0.8%
Other values (107497)    562075    74.4%
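A word table like the one above can be reproduced with a whitespace tokenizer and a counter; a minimal sketch, illustrative rather than ydata-profiling's actual implementation:

```python
from collections import Counter

def word_table(text: str, top: int = 10) -> list:
    # Tokenize on whitespace, then report the top-N words with their
    # counts and relative frequencies, like the profiler's Words table.
    counts = Counter(text.split())
    total = sum(counts.values())
    return [(word, n, round(100 * n / total, 1))
            for word, n in counts.most_common(top)]

print(word_table("def f(x): return x if x else x"))
```

On real source files the long tail dominates, which is why "Other values" accounts for 74.4% here.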

Most occurring characters

Value                  Count      Frequency (%)
(space)                2064436    24.0%
e                      599898     7.0%
t                      421851     4.9%
s                      357516     4.2%
r                      342430     4.0%
a                      337965     3.9%
i                      312858     3.6%
n                      297564     3.5%
o                      296809     3.5%
l                      226308     2.6%
Other values (1645)    3343013    38.9%

Most occurring categories

Value        Count      Frequency (%)
(unknown)    8600648    100.0%

Most frequent character per category

(unknown)

Value                  Count      Frequency (%)
(space)                2064436    24.0%
e                      599898     7.0%
t                      421851     4.9%
s                      357516     4.2%
r                      342430     4.0%
a                      337965     3.9%
i                      312858     3.6%
n                      297564     3.5%
o                      296809     3.5%
l                      226308     2.6%
Other values (1645)    3343013    38.9%

Most occurring scripts

Value        Count      Frequency (%)
(unknown)    8600648    100.0%

Most frequent character per script

(unknown)

Value                  Count      Frequency (%)
(space)                2064436    24.0%
e                      599898     7.0%
t                      421851     4.9%
s                      357516     4.2%
r                      342430     4.0%
a                      337965     3.9%
i                      312858     3.6%
n                      297564     3.5%
o                      296809     3.5%
l                      226308     2.6%
Other values (1645)    3343013    38.9%

Most occurring blocks

Value        Count      Frequency (%)
(unknown)    8600648    100.0%

Most frequent character per block

(unknown)

Value                  Count      Frequency (%)
(space)                2064436    24.0%
e                      599898     7.0%
t                      421851     4.9%
s                      357516     4.2%
r                      342430     4.0%
a                      337965     3.9%
i                      312858     3.6%
n                      297564     3.5%
o                      296809     3.5%
l                      226308     2.6%
Other values (1645)    3343013    38.9%

avg_line_length
Real number (ℝ)

Distinct        965
Distinct (%)    91.4%
Missing         0
Missing (%)     0.0%
Infinite        0
Infinite (%)    0.0%
Mean            33.043958
Minimum         6.5
Maximum         337.77255
Zeros           0
Zeros (%)       0.0%
Negative        0
Negative (%)    0.0%
Memory size     16.5 KiB

Quantile statistics

Minimum                      6.5
5-th percentile              19.4375
Q1                           27.08943
Median                       32.457471
Q3                           37.620174
95-th percentile             46.138461
Maximum                      337.77255
Range                        331.27255
Interquartile range (IQR)    10.530744

Descriptive statistics

Standard deviation                 14.489946
Coefficient of variation (CV)      0.43850515
Kurtosis                           215.06814
Mean                               33.043958
Median Absolute Deviation (MAD)    5.2435485
Skewness                           11.47083
Sum                                34894.42
Variance                           209.95853
Monotonicity                       Not monotonic
Histogram with fixed size bins (bins=50)

Common values

Value                 Count    Frequency (%)
32.77708333           7        0.7%
35.8368984            6        0.6%
30.15567282           6        0.6%
38.18767049           6        0.6%
38.17592593           5        0.5%
29                    4        0.4%
38.87548638           4        0.4%
28.5                  4        0.4%
47.9245283            4        0.4%
37.03121387           3        0.3%
Other values (955)    1007     95.4%

Minimum 10 values

Value          Count    Frequency (%)
6.5            1        0.1%
7.2            1        0.1%
8.4            1        0.1%
9.05           1        0.1%
9.166666667    1        0.1%
9.333333333    1        0.1%
10             1        0.1%
10.95652174    1        0.1%
11             1        0.1%
11.375         1        0.1%

Maximum 10 values

Value          Count    Frequency (%)
337.772549     1        0.1%
221.9415584    1        0.1%
141.8848347    1        0.1%
103.0554324    1        0.1%
70.04761905    1        0.1%
69             1        0.1%
68.45901639    1        0.1%
66.66165612    1        0.1%
65.53766234    1        0.1%
65.14285714    1        0.1%

max_line_length
Real number (ℝ)

SKEWED 

Distinct        209
Distinct (%)    19.8%
Missing         0
Missing (%)     0.0%
Infinite        0
Infinite (%)    0.0%
Mean            163.11174
Minimum         10
Maximum         28647
Zeros           0
Zeros (%)       0.0%
Negative        0
Negative (%)    0.0%
Memory size     16.5 KiB

Quantile statistics

Minimum                      10
5-th percentile              46.75
Q1                           77
Median                       88
Q3                           114
95-th percentile             208
Maximum                      28647
Range                        28637
Interquartile range (IQR)    37

Descriptive statistics

Standard deviation                 1027.1897
Coefficient of variation (CV)      6.2974599
Kurtosis                           592.47872
Mean                               163.11174
Median Absolute Deviation (MAD)    15
Skewness                           23.04552
Sum                                172246
Variance                           1055118.6
Monotonicity                       Not monotonic
Histogram with fixed size bins (bins=50)

Common values

Value                 Count    Frequency (%)
79                    56       5.3%
78                    43       4.1%
80                    26       2.5%
77                    26       2.5%
88                    26       2.5%
97                    21       2.0%
74                    21       2.0%
96                    19       1.8%
72                    19       1.8%
87                    19       1.8%
Other values (199)    780      73.9%

Minimum 10 values

Value    Count    Frequency (%)
10       1        0.1%
12       1        0.1%
13       1        0.1%
16       1        0.1%
17       1        0.1%
18       1        0.1%
21       3        0.3%
22       1        0.1%
23       1        0.1%
25       1        0.1%

Maximum 10 values

Value    Count    Frequency (%)
28647    1        0.1%
12884    1        0.1%
10350    1        0.1%
3675     1        0.1%
2854     1        0.1%
1878     1        0.1%
1463     1        0.1%
1390     1        0.1%
1357     1        0.1%
1301     1        0.1%
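The γ1 = 23.05 flagged in the alerts is the sample-adjusted skewness, which a single extreme value (like the 28647-character line above) can dominate. It can be reproduced from raw moments as sketched below; this is a pure-Python illustration on made-up line lengths, not the report's actual data:

```python
import statistics

def sample_skewness(xs) -> float:
    # Adjusted Fisher-Pearson skewness (the G1 statistic pandas' .skew()
    # reports): bias-correct the moment ratio g1 = m3 / m2**1.5.
    n = len(xs)
    mean = statistics.fmean(xs)
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    g1 = m3 / m2 ** 1.5
    return g1 * (n * (n - 1)) ** 0.5 / (n - 2)

# One huge outlier among typical ~80-character maxima skews the sample hard,
# while robust statistics (median, IQR) barely move.
lengths = [79, 78, 80, 77, 88, 97, 74, 96, 72, 87, 28647]
print(sample_skewness(lengths))
```

This is why the report's median (88) and IQR (37) stay close to typical code-style limits even though the mean is 163.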

alphanum_fraction
Real number (ℝ)

Distinct        991
Distinct (%)    93.8%
Missing         0
Missing (%)     0.0%
Infinite        0
Infinite (%)    0.0%
Mean            0.63554585
Minimum         0.29768786
Maximum         0.91150442
Zeros           0
Zeros (%)       0.0%
Negative        0
Negative (%)    0.0%
Memory size     16.5 KiB

Quantile statistics

Minimum                      0.29768786
5-th percentile              0.51059973
Q1                           0.58841698
Median                       0.63573579
Q3                           0.68345632
95-th percentile             0.77197342
Maximum                      0.91150442
Range                        0.61381656
Interquartile range (IQR)    0.095039347

Descriptive statistics

Standard deviation                 0.082392096
Coefficient of variation (CV)      0.12963989
Kurtosis                           1.1539898
Mean                               0.63554585
Median Absolute Deviation (MAD)    0.047460271
Skewness                           -0.21104951
Sum                                671.13642
Variance                           0.0067884575
Monotonicity                       Not monotonic
Histogram with fixed size bins (bins=50)

Common values

Value                 Count    Frequency (%)
0.5566643361          7        0.7%
0.6438856972          6        0.6%
0.6273514743          6        0.6%
0.6299037115          6        0.6%
0.6                   5        0.5%
0.6403104536          5        0.5%
0.6633858268          4        0.4%
0.651886698           4        0.4%
0.6635983264          3        0.3%
0.5909403097          3        0.3%
Other values (981)    1007     95.4%

Minimum 10 values

Value           Count    Frequency (%)
0.2976878613    1        0.1%
0.336492891     1        0.1%
0.3409878127    1        0.1%
0.3476783692    1        0.1%
0.3484646195    1        0.1%
0.3751692129    1        0.1%
0.3794990877    1        0.1%
0.3814961547    1        0.1%
0.3849915533    1        0.1%
0.3894428152    1        0.1%

Maximum 10 values

Value           Count    Frequency (%)
0.9115044248    1        0.1%
0.875           1        0.1%
0.8666666667    1        0.1%
0.86            1        0.1%
0.8586956522    1        0.1%
0.8577586207    1        0.1%
0.8522427441    1        0.1%
0.8512396694    1        0.1%
0.8507462687    1        0.1%
0.8474576271    1        0.1%

Interactions

Pairwise interaction scatter plots for each combination of the seven numeric variables (a 7 × 7 grid of Matplotlib v3.8.0 figures).

Missing values

A simple visualization of nullity by column.
The nullity matrix is a data-dense display that lets you quickly pick out patterns in data completion.
The correlation heatmap measures nullity correlation: how strongly the presence or absence of one variable affects the presence of another.
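Nullity correlation can be computed directly in pandas by correlating each column's missingness indicator. A minimal sketch, assuming a tiny hypothetical frame that mirrors this dataset's pattern (the star-event columns are missing in the same rows, matching their identical 46.4% missing counts):

```python
import numpy as np
import pandas as pd

# Hypothetical frame: the two star columns are null in exactly the same rows,
# while the issues column is null in the complementary rows.
df = pd.DataFrame({
    "max_stars_count":     [1.0, np.nan, 3.0, np.nan],
    "max_stars_event_min": ["2020", None, "2021", None],
    "max_issues_count":    [np.nan, 2.0, np.nan, 4.0],
})

# Correlate the 0/1 missingness indicators of every pair of columns.
nullity_corr = df.isnull().astype(int).corr()

# Columns that go missing together correlate at +1; complementary patterns at -1.
assert abs(nullity_corr.loc["max_stars_count", "max_stars_event_min"] - 1.0) < 1e-9
assert abs(nullity_corr.loc["max_stars_count", "max_issues_count"] + 1.0) < 1e-9
```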

Sample

The 29 columns shown in the sample rows below are: hexsha, size, ext, lang, max_stars_repo_path, max_stars_repo_name, max_stars_repo_head_hexsha, max_stars_repo_licenses, max_stars_count, max_stars_repo_stars_event_min_datetime, max_stars_repo_stars_event_max_datetime, max_issues_repo_path, max_issues_repo_name, max_issues_repo_head_hexsha, max_issues_repo_licenses, max_issues_count, max_issues_repo_issues_event_min_datetime, max_issues_repo_issues_event_max_datetime, max_forks_repo_path, max_forks_repo_name, max_forks_repo_head_hexsha, max_forks_repo_licenses, max_forks_count, max_forks_repo_forks_event_min_datetime, max_forks_repo_forks_event_max_datetime, content, avg_line_length, max_line_length, alphanum_fraction.
0d99a1e98eccb58cbc0c0cef6e9e6702f33461b0e5886pyPythonpublic_data/serializers.pyMTES-MCT/sparte3b8ae6d21da81ca761d64ae9dfe2c8f54487211cMITNoneNoneNonepublic_data/serializers.pyMTES-MCT/sparte3b8ae6d21da81ca761d64ae9dfe2c8f54487211cMIT32022-02-10T11:47:58.000Z2022-02-23T18:50:24.000Zpublic_data/serializers.pyMTES-MCT/sparte3b8ae6d21da81ca761d64ae9dfe2c8f54487211cMITNoneNoneNonefrom rest_framework_gis import serializers\nfrom rest_framework import serializers as s\n\nfrom .models import (\n Artificialisee2015to2018,\n Artificielle2018,\n CommunesSybarval,\n CouvertureSol,\n EnveloppeUrbaine2018,\n Ocsge,\n Renaturee2018to2015,\n Sybarval,\n Voirie2018,\n ZonesBaties2018,\n UsageSol,\n)\n\n\ndef get_label(code="", label=""):\n if code is None:\n code = "-"\n if label is None:\n label = "inconnu"\n return f"{code} {label[:30]}"\n\n\nclass Artificialisee2015to2018Serializer(serializers.GeoFeatureModelSerializer):\n usage_2015 = s.SerializerMethodField()\n usage_2018 = s.SerializerMethodField()\n couverture_2015 = s.SerializerMethodField()\n couverture_2018 = s.SerializerMethodField()\n\n def get_usage_2015(self, obj):\n return get_label(code=obj.us_2015, label=obj.us_2015_label)\n\n def get_usage_2018(self, obj):\n return get_label(code=obj.us_2018, label=obj.us_2018_label)\n\n def get_couverture_2015(self, obj):\n return get_label(code=obj.cs_2015, label=obj.cs_2015_label)\n\n def get_couverture_2018(self, obj):\n return get_label(code=obj.cs_2018, label=obj.cs_2018_label)\n\n class Meta:\n fields = (\n "id",\n "surface",\n "usage_2015",\n "usage_2018",\n "couverture_2015",\n "couverture_2018",\n )\n geo_field = "mpoly"\n model = Artificialisee2015to2018\n\n\nclass Artificielle2018Serializer(serializers.GeoFeatureModelSerializer):\n couverture = s.SerializerMethodField()\n\n def get_couverture(self, obj):\n return get_label(code=obj.couverture, label=obj.couverture_label)\n\n class Meta:\n fields = (\n "id",\n "surface",\n "couverture",\n )\n geo_field = "mpoly"\n model 
= Artificielle2018\n\n\nclass CommunesSybarvalSerializer(serializers.GeoFeatureModelSerializer):\n """Marker GeoJSON serializer."""\n\n class Meta:\n """Marker serializer meta class."""\n\n fields = (\n "nom",\n "code_insee",\n "surface",\n )\n geo_field = "mpoly"\n model = CommunesSybarval\n\n\nclass EnveloppeUrbaine2018Serializer(serializers.GeoFeatureModelSerializer):\n couverture = s.SerializerMethodField()\n\n def get_couverture(self, obj):\n return get_label(code=obj.couverture, label=obj.couverture_label)\n\n class Meta:\n fields = (\n "id",\n "couverture",\n "surface",\n )\n geo_field = "mpoly"\n model = EnveloppeUrbaine2018\n\n\nclass OcsgeSerializer(serializers.GeoFeatureModelSerializer):\n couverture = s.SerializerMethodField()\n usage = s.SerializerMethodField()\n\n def get_couverture(self, obj):\n return get_label(code=obj.couverture, label=obj.couverture_label)\n\n def get_usage(self, obj):\n return get_label(code=obj.usage, label=obj.usage_label)\n\n class Meta:\n fields = (\n "id",\n "couverture",\n "usage",\n "millesime",\n "map_color",\n "year",\n )\n geo_field = "mpoly"\n model = Ocsge\n\n\nclass Renaturee2018to2015Serializer(serializers.GeoFeatureModelSerializer):\n usage_2015 = s.SerializerMethodField()\n usage_2018 = s.SerializerMethodField()\n couverture_2015 = s.SerializerMethodField()\n couverture_2018 = s.SerializerMethodField()\n\n def get_usage_2015(self, obj):\n return get_label(code=obj.us_2015, label=obj.us_2015_label)\n\n def get_usage_2018(self, obj):\n return get_label(code=obj.us_2018, label=obj.us_2018_label)\n\n def get_couverture_2015(self, obj):\n return get_label(code=obj.cs_2015, label=obj.cs_2015_label)\n\n def get_couverture_2018(self, obj):\n return get_label(code=obj.cs_2018, label=obj.cs_2018_label)\n\n class Meta:\n fields = (\n "id",\n "surface",\n "usage_2015",\n "usage_2018",\n "couverture_2015",\n "couverture_2018",\n )\n geo_field = "mpoly"\n model = Renaturee2018to2015\n\n\nclass 
SybarvalSerializer(serializers.GeoFeatureModelSerializer):\n class Meta:\n fields = (\n "id",\n "surface",\n )\n geo_field = "mpoly"\n model = Sybarval\n\n\nclass Voirie2018Serializer(serializers.GeoFeatureModelSerializer):\n couverture = s.SerializerMethodField()\n usage = s.SerializerMethodField()\n\n def get_couverture(self, obj):\n return get_label(code=obj.couverture, label=obj.couverture_label)\n\n def get_usage(self, obj):\n return get_label(code=obj.usage, label=obj.usage_label)\n\n class Meta:\n fields = (\n "id",\n "surface",\n "couverture",\n "usage",\n )\n geo_field = "mpoly"\n model = Voirie2018\n\n\nclass ZonesBaties2018Serializer(serializers.GeoFeatureModelSerializer):\n couverture = s.SerializerMethodField()\n usage = s.SerializerMethodField()\n\n def get_couverture(self, obj):\n return get_label(code=obj.couverture, label=obj.couverture_label)\n\n def get_usage(self, obj):\n return get_label(code=obj.usage, label=obj.usage_label)\n\n class Meta:\n fields = (\n "id",\n "couverture",\n "usage",\n "surface",\n )\n geo_field = "mpoly"\n model = ZonesBaties2018\n\n\nclass CouvertureSolSerializer(serializers.ModelSerializer):\n class Meta:\n fields = (\n "id",\n "parent",\n "code",\n "label",\n "is_artificial",\n )\n model = CouvertureSol\n\n\nclass UsageSolSerializer(serializers.ModelSerializer):\n class Meta:\n fields = (\n "id",\n "parent",\n "code",\n "label",\n )\n model = UsageSol\n25.370690800.613829
0d99a20277c32bb1e28312f42ab6d732f38323169241pyPythonquick_search/admin.pynaman1901/django-quick-search7b93554ed9fa4721e52372f9fd1a395d94cc04a7MITNoneNoneNonequick_search/admin.pynaman1901/django-quick-search7b93554ed9fa4721e52372f9fd1a395d94cc04a7MIT22020-02-11T23:28:22.000Z2020-06-05T19:27:40.000Zquick_search/admin.pyHereWithoutPermission/django-quick-search7b93554ed9fa4721e52372f9fd1a395d94cc04a7MITNoneNoneNonefrom django.contrib import admin\nfrom .models import SearchResult\n\n# Register your models here.\nclass SearchResultAdmin(admin.ModelAdmin):\n fields = ["query", "heading", "url", "text"]\n\nadmin.site.register(SearchResult, SearchResultAdmin)30.125000520.771784
0d99b5ab0ec594ac30b1d197b23a5cda7c48151d518065pyPythonrasa/train.pyAmirali-Shirkh/rasa-for-botfront36aa24ad31241c5d1a180bbe34e1c8c50da40ff7Apache-2.0NoneNoneNonerasa/train.pyAmirali-Shirkh/rasa-for-botfront36aa24ad31241c5d1a180bbe34e1c8c50da40ff7Apache-2.0NoneNoneNonerasa/train.pyAmirali-Shirkh/rasa-for-botfront36aa24ad31241c5d1a180bbe34e1c8c50da40ff7Apache-2.0NoneNoneNoneimport asyncio\nimport os\nimport tempfile\nfrom contextlib import ExitStack\nfrom typing import Text, Optional, List, Union, Dict\n\nfrom rasa.importers.importer import TrainingDataImporter\nfrom rasa import model\nfrom rasa.model import FingerprintComparisonResult\nfrom rasa.core.domain import Domain\nfrom rasa.utils.common import TempDirectoryPath\n\nfrom rasa.cli.utils import (\n print_success,\n print_warning,\n print_error,\n bcolors,\n print_color,\n)\nfrom rasa.constants import DEFAULT_MODELS_PATH, DEFAULT_CORE_SUBDIRECTORY_NAME\n\n\ndef train(\n domain: Text,\n config: Text,\n training_files: Union[Text, List[Text]],\n output: Text = DEFAULT_MODELS_PATH,\n force_training: bool = False,\n fixed_model_name: Optional[Text] = None,\n persist_nlu_training_data: bool = False,\n additional_arguments: Optional[Dict] = None,\n loop: Optional[asyncio.AbstractEventLoop] = None,\n) -> Optional[Text]:\n if loop is None:\n try:\n loop = asyncio.get_event_loop()\n except RuntimeError:\n loop = asyncio.new_event_loop()\n asyncio.set_event_loop(loop)\n\n return loop.run_until_complete(\n train_async(\n domain=domain,\n config=config,\n training_files=training_files,\n output_path=output,\n force_training=force_training,\n fixed_model_name=fixed_model_name,\n persist_nlu_training_data=persist_nlu_training_data,\n additional_arguments=additional_arguments,\n )\n )\n\n\nasync def train_async(\n domain: Union[Domain, Text],\n config: Dict[Text, Text],\n training_files: Optional[Union[Text, List[Text]]],\n output_path: Text = DEFAULT_MODELS_PATH,\n force_training: bool = False,\n fixed_model_name: 
Optional[Text] = None,\n persist_nlu_training_data: bool = False,\n additional_arguments: Optional[Dict] = None,\n) -> Optional[Text]:\n """Trains a Rasa model (Core and NLU).\n\n Args:\n domain: Path to the domain file.\n config: Dict of paths to the config for Core and NLU. Keys are language codes\n training_files: Paths to the training data for Core and NLU.\n output_path: Output path.\n force_training: If `True` retrain model even if data has not changed.\n fixed_model_name: Name of model to be stored.\n persist_nlu_training_data: `True` if the NLU training data should be persisted\n with the model.\n additional_arguments: Additional training parameters.\n\n Returns:\n Path of the trained model archive.\n """\n\n # file_importer = TrainingDataImporter.load_from_config(\n # config, domain, training_files\n # )\n\n with ExitStack() as stack:\n train_path = stack.enter_context(TempDirectoryPath(tempfile.mkdtemp()))\n\n # bf mod\n from rasa_addons.importers import BotfrontFileImporter\n file_importer = BotfrontFileImporter(config, domain, training_files)\n # domain = await file_importer.get_domain()\n # if domain.is_empty():\n # return await handle_domain_if_not_exists(\n # file_importer, output_path, fixed_model_name\n # )\n # /bf mod\n\n return await _train_async_internal(\n file_importer,\n train_path,\n output_path,\n force_training,\n fixed_model_name,\n persist_nlu_training_data,\n additional_arguments,\n )\n\n\nasync def handle_domain_if_not_exists(\n file_importer: TrainingDataImporter, output_path, fixed_model_name\n):\n nlu_model_only = await _train_nlu_with_validated_data(\n file_importer, output=output_path, fixed_model_name=fixed_model_name\n )\n print_warning(\n "Core training was skipped because no valid domain file was found. 
Only an nlu-model was created."\n "Please specify a valid domain using '--domain' argument or check if the provided domain file exists."\n )\n return nlu_model_only\n\n\nasync def _train_async_internal(\n file_importer: TrainingDataImporter,\n train_path: Text,\n output_path: Text,\n force_training: bool,\n fixed_model_name: Optional[Text],\n persist_nlu_training_data: bool,\n additional_arguments: Optional[Dict],\n) -> Optional[Text]:\n """Trains a Rasa model (Core and NLU). Use only from `train_async`.\n\n Args:\n file_importer: `TrainingDataImporter` which supplies the training data.\n train_path: Directory in which to train the model.\n output_path: Output path.\n force_training: If `True` retrain model even if data has not changed.\n persist_nlu_training_data: `True` if the NLU training data should be persisted\n with the model.\n fixed_model_name: Name of model to be stored.\n additional_arguments: Additional training parameters.\n\n Returns:\n Path of the trained model archive.\n """\n\n stories, nlu_data = await asyncio.gather(\n file_importer.get_stories(), file_importer.get_nlu_data()\n )\n\n # if stories.is_empty() and nlu_data.is_empty():\n # print_error(\n # "No training data given. Please provide stories and NLU data in "\n # "order to train a Rasa model using the '--data' argument."\n # )\n # return\n\n # if nlu_data.is_empty():\n # print_warning("No NLU data present. 
Just a Rasa Core model will be trained.")\n # return await _train_core_with_validated_data(\n # file_importer,\n # output=output_path,\n # fixed_model_name=fixed_model_name,\n # additional_arguments=additional_arguments,\n # )\n\n new_fingerprint = await model.model_fingerprint(file_importer)\n old_model = model.get_latest_model(output_path)\n fingerprint_comparison = FingerprintComparisonResult(force_training=force_training)\n if not force_training:\n fingerprint_comparison = model.should_retrain(\n new_fingerprint, old_model, train_path\n )\n\n # bf mod >\n if fingerprint_comparison.nlu == True: # replace True with list of all langs\n fingerprint_comparison.nlu = list(new_fingerprint.get("nlu-config", {}).keys())\n domain = await file_importer.get_domain()\n core_untrainable = domain.is_empty() or stories.is_empty()\n nlu_untrainable = [l for l, d in nlu_data.items() if d.is_empty()]\n fingerprint_comparison.core = fingerprint_comparison.core and not core_untrainable\n fingerprint_comparison.nlu = [l for l in fingerprint_comparison.nlu if l not in nlu_untrainable]\n\n if core_untrainable:\n print_color("Skipping Core training since domain or stories are empty.", color=bcolors.OKBLUE)\n for lang in nlu_untrainable:\n print_color("No NLU data found for language <{}>, skipping training...".format(lang), color=bcolors.OKBLUE)\n # </ bf mod\n\n if fingerprint_comparison.is_training_required():\n await _do_training(\n file_importer,\n output_path=output_path,\n train_path=train_path,\n fingerprint_comparison_result=fingerprint_comparison,\n fixed_model_name=fixed_model_name,\n persist_nlu_training_data=persist_nlu_training_data,\n additional_arguments=additional_arguments,\n )\n\n return model.package_model(\n fingerprint=new_fingerprint,\n output_directory=output_path,\n train_path=train_path,\n fixed_model_name=fixed_model_name,\n )\n\n print_success(\n "Nothing changed. 
You can use the old model stored at '{}'."\n "".format(os.path.abspath(old_model))\n )\n return old_model\n\n\nasync def _do_training(\n file_importer: TrainingDataImporter,\n output_path: Text,\n train_path: Text,\n fingerprint_comparison_result: Optional[FingerprintComparisonResult] = None,\n fixed_model_name: Optional[Text] = None,\n persist_nlu_training_data: bool = False,\n additional_arguments: Optional[Dict] = None,\n):\n if not fingerprint_comparison_result:\n fingerprint_comparison_result = FingerprintComparisonResult()\n\n if fingerprint_comparison_result.should_retrain_core():\n await _train_core_with_validated_data(\n file_importer,\n output=output_path,\n train_path=train_path,\n fixed_model_name=fixed_model_name,\n additional_arguments=additional_arguments,\n )\n elif fingerprint_comparison_result.should_retrain_nlg():\n print_color(\n "Core stories/configuration did not change. "\n "Only the templates section has been changed. A new model with "\n "the updated templates will be created.",\n color=bcolors.OKBLUE,\n )\n await model.update_model_with_new_domain(file_importer, train_path)\n else:\n print_color(\n "Core stories/configuration did not change. No need to retrain Core model.",\n color=bcolors.OKBLUE,\n )\n\n if fingerprint_comparison_result.should_retrain_nlu():\n await _train_nlu_with_validated_data(\n file_importer,\n output=output_path,\n train_path=train_path,\n fixed_model_name=fixed_model_name,\n retrain_nlu=fingerprint_comparison_result.nlu,\n persist_nlu_training_data=persist_nlu_training_data,\n )\n else:\n print_color(\n "NLU data/configuration did not change. 
No need to retrain NLU model.",\n color=bcolors.OKBLUE,\n )\n\n\ndef train_core(\n domain: Union[Domain, Text],\n config: Text,\n stories: Text,\n output: Text,\n train_path: Optional[Text] = None,\n fixed_model_name: Optional[Text] = None,\n additional_arguments: Optional[Dict] = None,\n) -> Optional[Text]:\n loop = asyncio.get_event_loop()\n return loop.run_until_complete(\n train_core_async(\n domain=domain,\n config=config,\n stories=stories,\n output=output,\n train_path=train_path,\n fixed_model_name=fixed_model_name,\n additional_arguments=additional_arguments,\n )\n )\n\n\nasync def train_core_async(\n domain: Union[Domain, Text],\n config: Text,\n stories: Text,\n output: Text,\n train_path: Optional[Text] = None,\n fixed_model_name: Optional[Text] = None,\n additional_arguments: Optional[Dict] = None,\n) -> Optional[Text]:\n """Trains a Core model.\n\n Args:\n domain: Path to the domain file.\n config: Path to the config file for Core.\n stories: Path to the Core training data.\n output: Output path.\n train_path: If `None` the model will be trained in a temporary\n directory, otherwise in the provided directory.\n fixed_model_name: Name of model to be stored.\n uncompress: If `True` the model will not be compressed.\n additional_arguments: Additional training parameters.\n\n Returns:\n If `train_path` is given it returns the path to the model archive,\n otherwise the path to the directory with the trained model files.\n\n """\n\n file_importer = TrainingDataImporter.load_core_importer_from_config(\n config, domain, [stories]\n )\n domain = await file_importer.get_domain()\n if domain.is_empty():\n print_error(\n "Core training was skipped because no valid domain file was found. "\n "Please specify a valid domain using '--domain' argument or check if the provided domain file exists."\n )\n return None\n\n if not await file_importer.get_stories():\n print_error(\n "No stories given. 
Please provide stories in order to "\n "train a Rasa Core model using the '--stories' argument."\n )\n return\n\n return await _train_core_with_validated_data(\n file_importer,\n output=output,\n train_path=train_path,\n fixed_model_name=fixed_model_name,\n additional_arguments=additional_arguments,\n )\n\n\nasync def _train_core_with_validated_data(\n file_importer: TrainingDataImporter,\n output: Text,\n train_path: Optional[Text] = None,\n fixed_model_name: Optional[Text] = None,\n additional_arguments: Optional[Dict] = None,\n) -> Optional[Text]:\n """Train Core with validated training and config data."""\n\n import rasa.core.train\n\n with ExitStack() as stack:\n if train_path:\n # If the train path was provided, do nothing on exit.\n _train_path = train_path\n else:\n # Otherwise, create a temp train path and clean it up on exit.\n _train_path = stack.enter_context(TempDirectoryPath(tempfile.mkdtemp()))\n\n # normal (not compare) training\n print_color("Training Core model...", color=bcolors.OKBLUE)\n domain, config = await asyncio.gather(\n file_importer.get_domain(), file_importer.get_config()\n )\n await rasa.core.train(\n domain_file=domain,\n training_resource=file_importer,\n output_path=os.path.join(_train_path, DEFAULT_CORE_SUBDIRECTORY_NAME),\n policy_config=config,\n additional_arguments=additional_arguments,\n )\n print_color("Core model training completed.", color=bcolors.OKBLUE)\n\n if train_path is None:\n # Only Core was trained.\n new_fingerprint = await model.model_fingerprint(file_importer)\n return model.package_model(\n fingerprint=new_fingerprint,\n output_directory=output,\n train_path=_train_path,\n fixed_model_name=fixed_model_name,\n model_prefix="core-",\n )\n\n return _train_path\n\n\ndef train_nlu(\n config: Text,\n nlu_data: Text,\n output: Text,\n train_path: Optional[Text] = None,\n fixed_model_name: Optional[Text] = None,\n persist_nlu_training_data: bool = False,\n) -> Optional[Text]:\n """Trains an NLU model.\n\n Args:\n 
config: Path to the config file for NLU.\n nlu_data: Path to the NLU training data.\n output: Output path.\n train_path: If `None` the model will be trained in a temporary\n directory, otherwise in the provided directory.\n fixed_model_name: Name of the model to be stored.\n persist_nlu_training_data: `True` if the NLU training data should be persisted\n with the model.\n\n\n Returns:\n If `train_path` is given it returns the path to the model archive,\n otherwise the path to the directory with the trained model files.\n\n """\n\n loop = asyncio.get_event_loop()\n return loop.run_until_complete(\n _train_nlu_async(\n config,\n nlu_data,\n output,\n train_path,\n fixed_model_name,\n persist_nlu_training_data,\n )\n )\n\n\nasync def _train_nlu_async(\n config: Text,\n nlu_data: Text,\n output: Text,\n train_path: Optional[Text] = None,\n fixed_model_name: Optional[Text] = None,\n persist_nlu_training_data: bool = False,\n):\n if not nlu_data:\n print_error(\n "No NLU data given. Please provide NLU data in order to train "\n "a Rasa NLU model using the '--nlu' argument."\n )\n return\n\n # training NLU only hence the training files still have to be selected\n file_importer = TrainingDataImporter.load_nlu_importer_from_config(\n config, training_data_paths=[nlu_data]\n )\n\n training_datas = await file_importer.get_nlu_data()\n if training_datas.is_empty():\n print_error(\n f"Path '{nlu_data}' doesn't contain valid NLU data in it. "\n "Please verify the data format. 
"\n "The NLU model training will be skipped now."\n )\n return\n\n return await _train_nlu_with_validated_data(\n file_importer,\n output=output,\n train_path=train_path,\n fixed_model_name=fixed_model_name,\n persist_nlu_training_data=persist_nlu_training_data,\n )\n\n\nasync def _train_nlu_with_validated_data(\n file_importer: TrainingDataImporter,\n output: Text,\n train_path: Optional[Text] = None,\n fixed_model_name: Optional[Text] = None,\n persist_nlu_training_data: bool = False,\n retrain_nlu: Union[bool, List[Text]] = True\n) -> Optional[Text]:\n """Train NLU with validated training and config data."""\n\n import rasa.nlu.train\n\n with ExitStack() as stack:\n models = {}\n from rasa.nlu import config as cfg_loader\n\n if train_path:\n # If the train path was provided, do nothing on exit.\n _train_path = train_path\n else:\n # Otherwise, create a temp train path and clean it up on exit.\n _train_path = stack.enter_context(TempDirectoryPath(tempfile.mkdtemp()))\n # bf mod\n config = await file_importer.get_nlu_config(retrain_nlu)\n for lang in config:\n if config[lang]:\n print_color("Start training {} NLU model ...".format(lang), color=bcolors.OKBLUE)\n _, models[lang], _ = await rasa.nlu.train(\n config[lang],\n file_importer,\n _train_path,\n fixed_model_name="nlu-{}".format(lang),\n persist_nlu_training_data=persist_nlu_training_data,\n )\n else:\n print_color("NLU data for language <{}> didn't change, skipping training...".format(lang), color=bcolors.OKBLUE)\n # /bf mod\n print_color("NLU model training completed.", color=bcolors.OKBLUE)\n\n if train_path is None:\n # Only NLU was trained\n new_fingerprint = await model.model_fingerprint(file_importer)\n\n return model.package_model(\n fingerprint=new_fingerprint,\n output_directory=output,\n train_path=_train_path,\n fixed_model_name=fixed_model_name,\n model_prefix="nlu-",\n )\n\n return _train_path\n34.6737041280.654027
0d99e8a9a95f28da6c2d4d1ee42e95a270ab08977421pyPythoncoding_intereview/1475. Final Prices With a Special Discount in a Shop.pyJahidul007/Python-Bootcamp3c870587465ff66c2c1871c8d3c4eea72463abdaMIT22020-12-07T16:07:07.000Z2020-12-07T16:08:53.000Zcoding_intereview/1475. Final Prices With a Special Discount in a Shop.pypurusharthmalik/Python-Bootcamp2ed1cf886d1081de200b0fdd4cb4e28008c7e3d1MITNoneNoneNonecoding_intereview/1475. Final Prices With a Special Discount in a Shop.pypurusharthmalik/Python-Bootcamp2ed1cf886d1081de200b0fdd4cb4e28008c7e3d1MIT12020-10-03T16:38:02.000Z2020-10-03T16:38:02.000Zclass Solution:\n def finalPrices(self, prices: List[int]) -> List[int]:\n res = []\n for i in range(len(prices)):\n for j in range(i+1,len(prices)):\n if prices[j]<=prices[i]:\n res.append(prices[i]-prices[j])\n break\n if j==len(prices)-1:\n res.append(prices[i])\n res.append(prices[-1])\n return res35.083333580.460808
Sample row (index 0)
hexsha: d99ed7256245422c7c5dd3c60b0661e4f78183ea | size: 35585 | ext: py | lang: Python
repo_path (max_stars / max_issues / max_forks): rplugin/python3/denite/ui/default.py
repo_name: timgates42/denite.nvim | head_hexsha: 12a9b5456f5a4600afeb0ba284ce1098bd35e501
repo_licenses (stars / issues / forks): MIT
max_stars_count: None | max_issues_count: None | max_forks_count: None (all event min/max datetimes: None)
content:

```python
# ============================================================================
# FILE: default.py
# AUTHOR: Shougo Matsushita <Shougo.Matsu at gmail.com>
# License: MIT license
# ============================================================================

import re
import typing

from denite.util import echo, error, clearmatch, regex_convert_py_vim
from denite.util import Nvim, UserContext, Candidates, Candidate
from denite.parent import SyncParent


class Default(object):
    @property
    def is_async(self) -> bool:
        return self._is_async

    def __init__(self, vim: Nvim) -> None:
        self._vim = vim
        self._denite: typing.Optional[SyncParent] = None
        self._selected_candidates: typing.List[int] = []
        self._candidates: Candidates = []
        self._cursor = 0
        self._entire_len = 0
        self._result: typing.List[typing.Any] = []
        self._context: UserContext = {}
        self._bufnr = -1
        self._winid = -1
        self._winrestcmd = ''
        self._initialized = False
        self._winheight = 0
        self._winwidth = 0
        self._winminheight = -1
        self._is_multi = False
        self._is_async = False
        self._matched_pattern = ''
        self._displayed_texts: typing.List[str] = []
        self._statusline_sources = ''
        self._titlestring = ''
        self._ruler = False
        self._prev_action = ''
        self._prev_status: typing.Dict[str, typing.Any] = {}
        self._prev_curpos: typing.List[typing.Any] = []
        self._save_window_options: typing.Dict[str, typing.Any] = {}
        self._sources_history: typing.List[typing.Any] = []
        self._previous_text = ''
        self._floating = False
        self._filter_floating = False
        self._updated = False
        self._timers: typing.Dict[str, int] = {}
        self._matched_range_id = -1
        self._matched_char_id = -1
        self._check_matchdelete = bool(self._vim.call(
            'denite#util#check_matchdelete'))

    def start(self, sources: typing.List[typing.Any],
              context: UserContext) -> typing.List[typing.Any]:
        if not self._denite:
            # if hasattr(self._vim, 'run_coroutine'):
            #     self._denite = ASyncParent(self._vim)
            # else:
            self._denite = SyncParent(self._vim)

        self._result = []
        context['sources_queue'] = [sources]

        self._start_sources_queue(context)

        return self._result

    def do_action(self, action_name: str,
                  command: str = '', is_manual: bool = False) -> None:
        if is_manual:
            candidates = self._get_selected_candidates()
        elif self._get_cursor_candidate():
            candidates = [self._get_cursor_candidate()]
        else:
            candidates = []

        if not self._denite or not candidates or not action_name:
            return

        self._prev_action = action_name
        action = self._denite.get_action(
            self._context, action_name, candidates)
        if not action:
            return

        post_action = self._context['post_action']

        is_quit = action['is_quit'] or post_action == 'quit'
        if is_quit:
            self.quit()

        self._denite.do_action(self._context, action_name, candidates)
        self._result = candidates
        if command != '':
            self._vim.command(command)

        if is_quit and post_action == 'open':
            # Re-open denite buffer

            prev_cursor = self._cursor
            cursor_candidate = self._get_cursor_candidate()

            self._init_buffer()

            self.redraw(False)

            if cursor_candidate == self._get_candidate(prev_cursor):
                # Restore the cursor
                self._move_to_pos(prev_cursor)

            # Disable quit flag
            is_quit = False

        if not is_quit and is_manual:
            self._selected_candidates = []
            self.redraw(action['is_redraw'])

        if is_manual and self._context['sources_queue']:
            self._context['input'] = ''
            self._context['quick_move'] = ''
            self._start_sources_queue(self._context)

        return

    def redraw(self, is_force: bool = True) -> None:
        self._context['is_redraw'] = is_force
        if is_force:
            self._gather_candidates()
        if self._update_candidates():
            self._update_buffer()
        else:
            self._update_status()
        self._context['is_redraw'] = False

    def quit(self) -> None:
        if self._denite:
            self._denite.on_close(self._context)
        self._quit_buffer()
        self._result = []
        return

    def _restart(self) -> None:
        self._context['input'] = ''
        self._quit_buffer()
        self._init_denite()
        self._gather_candidates()
        self._init_buffer()
        self._update_candidates()
        self._update_buffer()

    def _start_sources_queue(self, context: UserContext) -> None:
        if not context['sources_queue']:
            return

        self._sources_history.append({
            'sources': context['sources_queue'][0],
            'path': context['path'],
        })

        self._start(context['sources_queue'][0], context)

        if context['sources_queue']:
            context['sources_queue'].pop(0)
        context['path'] = self._context['path']

    def _start(self, sources: typing.List[typing.Any],
               context: UserContext) -> None:
        from denite.ui.map import do_map

        self._vim.command('silent! autocmd! denite')

        if re.search(r'\[Command Line\]$', self._vim.current.buffer.name):
            # Ignore command line window.
            return

        resume = self._initialized and context['resume']
        if resume:
            # Skip the initialization

            update = ('immediately', 'immediately_1',
                      'cursor_pos', 'prev_winid',
                      'start_filter', 'quick_move')
            for key in update:
                self._context[key] = context[key]

            self._check_move_option()
            if self._check_do_option():
                return

            self._init_buffer()
            if context['refresh']:
                self.redraw()
            self._move_to_pos(self._cursor)
        else:
            if self._context != context:
                self._context.clear()
                self._context.update(context)
            self._context['sources'] = sources
            self._context['is_redraw'] = False
            self._is_multi = len(sources) > 1

            if not sources:
                # Ignore empty sources.
                error(self._vim, 'Empty sources')
                return

            self._init_denite()
            self._gather_candidates()
            self._update_candidates()

            self._init_cursor()
            self._check_move_option()
            if self._check_do_option():
                return

            self._init_buffer()

        self._update_displayed_texts()
        self._update_buffer()
        self._move_to_pos(self._cursor)

        if self._context['quick_move'] and do_map(self, 'quick_move', []):
            return

        if self._context['start_filter']:
            do_map(self, 'open_filter_buffer', [])

    def _init_buffer(self) -> None:
        self._prev_status = dict()
        self._displayed_texts = []

        self._prev_bufnr = self._vim.current.buffer.number
        self._prev_curpos = self._vim.call('getcurpos')
        self._prev_wininfo = self._get_wininfo()
        self._prev_winid = self._context['prev_winid']
        self._winrestcmd = self._vim.call('winrestcmd')

        self._ruler = self._vim.options['ruler']

        self._switch_buffer()
        self._bufnr = self._vim.current.buffer.number
        self._winid = self._vim.call('win_getid')

        self._resize_buffer(True)

        self._winheight = self._vim.current.window.height
        self._winwidth = self._vim.current.window.width

        self._bufvars = self._vim.current.buffer.vars
        self._bufvars['denite'] = {
            'buffer_name': self._context['buffer_name'],
        }
        self._bufvars['denite_statusline'] = {}

        self._vim.vars['denite#_previewed_buffers'] = {}

        self._save_window_options = {}
        window_options = {
            'colorcolumn',
            'concealcursor',
            'conceallevel',
            'cursorcolumn',
            'cursorline',
            'foldcolumn',
            'foldenable',
            'list',
            'number',
            'relativenumber',
            'signcolumn',
            'spell',
            'winfixheight',
            'wrap',
        }
        for k in window_options:
            self._save_window_options[k] = self._vim.current.window.options[k]

        # Note: Have to use setlocal instead of "current.window.options"
        # "current.window.options" changes global value instead of local in
        # neovim.
        self._vim.command('setlocal colorcolumn=')
        self._vim.command('setlocal conceallevel=3')
        self._vim.command('setlocal concealcursor=inv')
        self._vim.command('setlocal nocursorcolumn')
        self._vim.command('setlocal nofoldenable')
        self._vim.command('setlocal foldcolumn=0')
        self._vim.command('setlocal nolist')
        self._vim.command('setlocal nonumber')
        self._vim.command('setlocal norelativenumber')
        self._vim.command('setlocal nospell')
        self._vim.command('setlocal winfixheight')
        self._vim.command('setlocal nowrap')
        if self._context['prompt']:
            self._vim.command('setlocal signcolumn=yes')
        else:
            self._vim.command('setlocal signcolumn=auto')
        if self._context['cursorline']:
            self._vim.command('setlocal cursorline')

        options = self._vim.current.buffer.options
        if self._floating:
            # Disable ruler
            self._vim.options['ruler'] = False
        options['buftype'] = 'nofile'
        options['bufhidden'] = 'delete'
        options['swapfile'] = False
        options['buflisted'] = False
        options['modeline'] = False
        options['modifiable'] = False
        options['filetype'] = 'denite'

        if self._vim.call('exists', '#WinEnter'):
            self._vim.command('doautocmd WinEnter')

        if self._vim.call('exists', '#BufWinEnter'):
            self._vim.command('doautocmd BufWinEnter')

        if not self._vim.call('has', 'nvim'):
            # In Vim8, FileType autocmd is not fired after set filetype option.
            self._vim.command('silent doautocmd FileType denite')

        if self._context['auto_action']:
            self._vim.command('autocmd denite '
                              'CursorMoved <buffer> '
                              'call denite#call_map("auto_action")')

        self._init_syntax()

    def _switch_buffer(self) -> None:
        split = self._context['split']
        if (split != 'no' and self._winid > 0 and
                self._vim.call('win_gotoid', self._winid)):
            if split != 'vertical' and not self._floating:
                # Move the window to bottom
                self._vim.command('wincmd J')
            self._winrestcmd = ''
            return

        self._floating = split in [
            'floating',
            'floating_relative_cursor',
            'floating_relative_window',
        ]
        self._filter_floating = False

        if self._vim.current.buffer.options['filetype'] != 'denite':
            self._titlestring = self._vim.options['titlestring']

        command = 'edit'
        if split == 'tab':
            self._vim.command('tabnew')
        elif self._floating:
            self._split_floating(split)
        elif self._context['filter_split_direction'] == 'floating':
            self._filter_floating = True
        elif split != 'no':
            command = self._get_direction()
            command += ' vsplit' if split == 'vertical' else ' split'
        bufname = '[denite]-' + self._context['buffer_name']
        if self._vim.call('exists', '*bufadd'):
            bufnr = self._vim.call('bufadd', bufname)
            vertical = 'vertical' if split == 'vertical' else ''
            command = (
                'buffer' if split
                in ['no', 'tab', 'floating',
                    'floating_relative_window',
                    'floating_relative_cursor'] else 'sbuffer')
            self._vim.command(
                'silent keepalt %s %s %s %s' % (
                    self._get_direction(),
                    vertical,
                    command,
                    bufnr,
                )
            )
        else:
            self._vim.call(
                'denite#util#execute_path',
                f'silent keepalt {command}', bufname)

    def _get_direction(self) -> str:
        direction = str(self._context['direction'])
        if direction == 'dynamictop' or direction == 'dynamicbottom':
            self._update_displayed_texts()
            winwidth = self._vim.call('winwidth', 0)
            is_fit = not [x for x in self._displayed_texts
                          if self._vim.call('strwidth', x) > winwidth]
            if direction == 'dynamictop':
                direction = 'aboveleft' if is_fit else 'topleft'
            else:
                direction = 'belowright' if is_fit else 'botright'
        return direction

    def _get_wininfo(self) -> typing.List[typing.Any]:
        return [
            self._vim.options['columns'], self._vim.options['lines'],
            self._vim.call('win_getid'), self._vim.call('tabpagebuflist')
        ]

    def _switch_prev_buffer(self) -> None:
        if (self._prev_bufnr == self._bufnr or
                self._vim.buffers[self._prev_bufnr].name == ''):
            self._vim.command('enew')
        else:
            self._vim.command('buffer ' + str(self._prev_bufnr))

    def _init_syntax(self) -> None:
        self._vim.command('syntax case ignore')
        self._vim.command('highlight default link deniteInput ModeMsg')
        self._vim.command('highlight link deniteMatchedRange ' +
                          self._context['highlight_matched_range'])
        self._vim.command('highlight link deniteMatchedChar ' +
                          self._context['highlight_matched_char'])
        self._vim.command('highlight default link ' +
                          'deniteStatusLinePath Comment')
        self._vim.command('highlight default link ' +
                          'deniteStatusLineNumber LineNR')
        self._vim.command('highlight default link ' +
                          'deniteSelectedLine Statement')

        if self._floating:
            self._vim.current.window.options['winhighlight'] = (
                'Normal:' + self._context['highlight_window_background']
            )
        self._vim.command(('syntax match deniteSelectedLine /^[%s].*/' +
                           ' contains=deniteConcealedMark') % (
                               self._context['selected_icon']))
        self._vim.command(('syntax match deniteConcealedMark /^[ %s]/' +
                           ' conceal contained') % (
                               self._context['selected_icon']))

        if self._denite:
            self._denite.init_syntax(self._context, self._is_multi)

    def _update_candidates(self) -> bool:
        if not self._denite:
            return False

        [self._is_async, pattern, statuses, self._entire_len,
         self._candidates] = self._denite.filter_candidates(self._context)

        prev_displayed_texts = self._displayed_texts
        self._update_displayed_texts()

        prev_matched_pattern = self._matched_pattern
        self._matched_pattern = pattern

        prev_statusline_sources = self._statusline_sources
        self._statusline_sources = ' '.join(statuses)

        if self._is_async:
            self._start_timer('update_candidates')
        else:
            self._stop_timer('update_candidates')

        updated = (self._displayed_texts != prev_displayed_texts or
                   self._matched_pattern != prev_matched_pattern or
                   self._statusline_sources != prev_statusline_sources)
        if updated:
            self._updated = True
            self._start_timer('update_buffer')

        if self._context['search'] and self._context['input']:
            self._vim.call('setreg', '/', self._context['input'])
        return self._updated

    def _update_displayed_texts(self) -> None:
        candidates_len = len(self._candidates)
        if not self._is_async and self._context['auto_resize']:
            winminheight = self._context['winminheight']
            max_height = min(self._context['winheight'],
                             self._get_max_height())
            if (winminheight != -1 and candidates_len < winminheight):
                self._winheight = winminheight
            elif candidates_len > max_height:
                self._winheight = max_height
            elif candidates_len != self._winheight:
                self._winheight = candidates_len

        max_source_name_len = 0
        if self._candidates:
            max_source_name_len = max([
                len(self._get_display_source_name(x['source_name']))
                for x in self._candidates])
        self._context['max_source_name_len'] = max_source_name_len
        self._context['max_source_name_format'] = (
            '{:<' + str(self._context['max_source_name_len']) + '}')
        self._displayed_texts = [
            self._get_candidate_display_text(i)
            for i in range(0, candidates_len)
        ]

    def _update_buffer(self) -> None:
        is_current_buffer = self._bufnr == self._vim.current.buffer.number

        self._update_status()

        if self._check_matchdelete and self._context['match_highlight']:
            matches = [x['id'] for x in
                       self._vim.call('getmatches', self._winid)]
            if self._matched_range_id in matches:
                self._vim.call('matchdelete',
                               self._matched_range_id, self._winid)
                self._matched_range_id = -1
            if self._matched_char_id in matches:
                self._vim.call('matchdelete',
                               self._matched_char_id, self._winid)
                self._matched_char_id = -1

            if self._matched_pattern != '':
                self._matched_range_id = self._vim.call(
                    'matchadd', 'deniteMatchedRange',
                    r'\c' + regex_convert_py_vim(self._matched_pattern),
                    10, -1, {'window': self._winid})
                matched_char_pattern = '[{}]'.format(re.sub(
                    r'([\[\]\\^-])',
                    r'\\\1',
                    self._context['input'].replace(' ', '')
                ))
                self._matched_char_id = self._vim.call(
                    'matchadd', 'deniteMatchedChar',
                    matched_char_pattern,
                    10, -1, {'window': self._winid})

        prev_linenr = self._vim.call('line', '.')
        prev_candidate = self._get_cursor_candidate()

        buffer = self._vim.buffers[self._bufnr]
        buffer.options['modifiable'] = True
        self._vim.vars['denite#_candidates'] = [
            x['word'] for x in self._candidates]
        buffer[:] = self._displayed_texts
        buffer.options['modifiable'] = False

        self._previous_text = self._context['input']

        self._resize_buffer(is_current_buffer)

        is_changed = (self._context['reversed'] or
                      (is_current_buffer and
                       self._previous_text != self._context['input']))
        if self._updated and is_changed:
            if not is_current_buffer:
                save_winid = self._vim.call('win_getid')
                self._vim.call('win_gotoid', self._winid)
            self._init_cursor()
            self._move_to_pos(self._cursor)
            if not is_current_buffer:
                self._vim.call('win_gotoid', save_winid)
        elif is_current_buffer:
            self._vim.call('cursor', [prev_linenr, 0])

        if is_current_buffer:
            if (self._context['auto_action'] and
                    prev_candidate != self._get_cursor_candidate()):
                self.do_action(self._context['auto_action'])

        self._updated = False
        self._stop_timer('update_buffer')

    def _update_status(self) -> None:
        inpt = ''
        if self._context['input']:
            inpt = self._context['input'] + ' '
        if self._context['error_messages']:
            inpt = '[ERROR] ' + inpt
        path = '[' + self._context['path'] + ']'

        status = {
            'input': inpt,
            'sources': self._statusline_sources,
            'path': path,
            # Extra
            'buffer_name': self._context['buffer_name'],
            'line_total': len(self._candidates),
        }
        if status == self._prev_status:
            return

        self._bufvars['denite_statusline'] = status
        self._prev_status = status

        linenr = "printf('%'.(len(line('$'))+2).'d/%d',line('.'),line('$'))"

        if self._context['statusline']:
            if self._floating or self._filter_floating:
                self._vim.options['titlestring'] = (
                    "%{denite#get_status('input')}%* " +
                    "%{denite#get_status('sources')} " +
                    " %{denite#get_status('path')}%*" +
                    "%{" + linenr + "}%*")
            else:
                winnr = self._vim.call('win_id2win', self._winid)
                self._vim.call('setwinvar', winnr, '&statusline', (
                    "%#deniteInput#%{denite#get_status('input')}%* " +
                    "%{denite#get_status('sources')} %=" +
                    "%#deniteStatusLinePath# %{denite#get_status('path')}%*" +
                    "%#deniteStatusLineNumber#%{" + linenr + "}%*"))

    def _get_display_source_name(self, name: str) -> str:
        source_names = self._context['source_names']
        if not self._is_multi or source_names == 'hide':
            source_name = ''
        else:
            short_name = (re.sub(r'([a-zA-Z])[a-zA-Z]+', r'\1', name)
                          if re.search(r'[^a-zA-Z]', name) else name[:2])
            source_name = short_name if source_names == 'short' else name
        return source_name

    def _get_candidate_display_text(self, index: int) -> str:
        source_names = self._context['source_names']
        candidate = self._candidates[index]
        terms = []
        if self._is_multi and source_names != 'hide':
            terms.append(self._context['max_source_name_format'].format(
                self._get_display_source_name(candidate['source_name'])))
        encoding = self._context['encoding']
        abbr = candidate.get('abbr', candidate['word']).encode(
            encoding, errors='replace').decode(encoding, errors='replace')
        terms.append(abbr[:int(self._context['max_candidate_width'])])
        return (str(self._context['selected_icon'])
                if index in self._selected_candidates
                else ' ') + ' '.join(terms).replace('\n', '')

    def _get_max_height(self) -> int:
        return int(self._vim.options['lines']) if not self._floating else (
            int(self._vim.options['lines']) -
            int(self._context['winrow']) -
            int(self._vim.options['cmdheight']))

    def _resize_buffer(self, is_current_buffer: bool) -> None:
        split = self._context['split']
        if (split == 'no' or split == 'tab' or
                self._vim.call('winnr', '$') == 1):
            return

        winheight = max(self._winheight, 1)
        winwidth = max(self._winwidth, 1)
        is_vertical = split == 'vertical'

        if not is_current_buffer:
            restore = self._vim.call('win_getid')
            self._vim.call('win_gotoid', self._winid)

        if not is_vertical and self._vim.current.window.height != winheight:
            if self._floating:
                wincol = self._context['winrow']
                row = wincol
                if split == 'floating':
                    if self._context['auto_resize'] and row > 1:
                        row += self._context['winheight']
                        row -= self._winheight
                    self._vim.call('nvim_win_set_config', self._winid, {
                        'relative': 'editor',
                        'row': row,
                        'col': self._context['wincol'],
                        'width': winwidth,
                        'height': winheight,
                    })
                    filter_row = 0 if wincol == 1 else row + winheight
                    filter_col = self._context['wincol']
                else:
                    init_pos = self._vim.call('nvim_win_get_config',
                                              self._winid)
                    self._vim.call('nvim_win_set_config', self._winid, {
                        'relative': 'win',
                        'win': init_pos['win'],
                        'row': init_pos['row'],
                        'col': init_pos['col'],
                        'width': winwidth,
                        'height': winheight,
                    })
                    filter_col = init_pos['col']
                    if init_pos['anchor'] == 'NW':
                        winpos = self._vim.call('nvim_win_get_position',
                                                self._winid)
                        filter_row = winpos[0] + winheight

                filter_winid = self._vim.vars['denite#_filter_winid']
                self._context['filter_winrow'] = row
                if self._vim.call('win_id2win', filter_winid) > 0:
                    self._vim.call('nvim_win_set_config', filter_winid, {
                        'relative': 'editor',
                        'row': filter_row,
                        'col': filter_col,
                    })

            self._vim.command('resize ' + str(winheight))
            if self._context['reversed']:
                self._vim.command('normal! zb')
        elif is_vertical and self._vim.current.window.width != winwidth:
            self._vim.command('vertical resize ' + str(winwidth))

        if not is_current_buffer:
            self._vim.call('win_gotoid', restore)

    def _check_do_option(self) -> bool:
        if self._context['do'] != '':
            self._do_command(self._context['do'])
            return True
        elif (self._candidates and self._context['immediately'] or
                len(self._candidates) == 1 and self._context['immediately_1']):
            self._do_immediately()
            return True
        return not (self._context['empty'] or
                    self._is_async or self._candidates)

    def _check_move_option(self) -> None:
        if self._context['cursor_pos'].isnumeric():
            self._cursor = int(self._context['cursor_pos']) + 1
        elif re.match(r'\+\d+', self._context['cursor_pos']):
            for _ in range(int(self._context['cursor_pos'][1:])):
                self._move_to_next_line()
        elif re.match(r'-\d+', self._context['cursor_pos']):
            for _ in range(int(self._context['cursor_pos'][1:])):
                self._move_to_prev_line()
        elif self._context['cursor_pos'] == '$':
            self._move_to_last_line()

    def _do_immediately(self) -> None:
        goto = self._winid > 0 and self._vim.call(
            'win_gotoid', self._winid)
        if goto:
            # Jump to denite window
            self._init_buffer()
        self.do_action('default')
        candidate = self._get_cursor_candidate()
        if not candidate:
            return
        echo(self._vim, 'Normal', '[{}/{}] {}'.format(
            self._cursor, len(self._candidates),
            candidate.get('abbr', candidate['word'])))
        if goto:
            # Move to the previous window
            self._vim.command('wincmd p')

    def _do_command(self, command: str) -> None:
        self._init_cursor()
        cursor = 1
        while cursor < len(self._candidates):
            self.do_action('default', command)
            self._move_to_next_line()
        self._quit_buffer()

    def _cleanup(self) -> None:
        self._stop_timer('update_candidates')
        self._stop_timer('update_buffer')

        if self._vim.current.buffer.number == self._bufnr:
            self._cursor = self._vim.call('line', '.')

        # Note: Close filter window before preview window
        self._vim.call('denite#filter#_close_filter_window')
        if not self._context['has_preview_window']:
            self._vim.command('pclose!')
        # Clear previewed buffers
        for bufnr in self._vim.vars['denite#_previewed_buffers'].keys():
            if not self._vim.call('win_findbuf', bufnr):
                self._vim.command('silent bdelete ' + str(bufnr))
        self._vim.vars['denite#_previewed_buffers'] = {}

        self._vim.command('highlight! link CursorLine CursorLine')
        if self._floating or self._filter_floating:
            self._vim.options['titlestring'] = self._titlestring
            self._vim.options['ruler'] = self._ruler

    def _close_current_window(self) -> None:
        if self._vim.call('winnr', '$') == 1:
            self._vim.command('buffer #')
        else:
            self._vim.command('close!')

    def _quit_buffer(self) -> None:
        self._cleanup()
        if self._vim.call('bufwinnr', self._bufnr) < 0:
            # Denite buffer is already closed
            return

        winids = self._vim.call('win_findbuf',
                                self._vim.vars['denite#_filter_bufnr'])
        if winids:
            # Quit filter buffer
            self._vim.call('win_gotoid', winids[0])
            self._close_current_window()
            # Move to denite window
            self._vim.call('win_gotoid', self._winid)

        # Restore the window
        if self._context['split'] == 'no':
            self._switch_prev_buffer()
            for k, v in self._save_window_options.items():
                self._vim.current.window.options[k] = v
        else:
            if self._context['split'] == 'tab':
                self._vim.command('tabclose!')

            if self._context['split'] != 'tab':
                self._close_current_window()

            self._vim.call('win_gotoid', self._prev_winid)

        # Restore the position
        self._vim.call('setpos', '.', self._prev_curpos)

        if self._get_wininfo() and self._get_wininfo() == self._prev_wininfo:
            # Note: execute restcmd twice to restore layout properly
            self._vim.command(self._winrestcmd)
            self._vim.command(self._winrestcmd)

        clearmatch(self._vim)

    def _get_cursor_candidate(self) -> Candidate:
        return self._get_candidate(self._cursor)

    def _get_candidate(self, pos: int) -> Candidate:
        if not self._candidates or pos > len(self._candidates):
            return {}
        return self._candidates[pos - 1]

    def _get_selected_candidates(self) -> Candidates:
        if not self._selected_candidates:
            return [self._get_cursor_candidate()
                    ] if self._get_cursor_candidate() else []
        return [self._candidates[x] for x in self._selected_candidates]

    def _init_denite(self) -> None:
        if self._denite:
            self._denite.start(self._context)
            self._denite.on_init(self._context)
        self._initialized = True
        self._winheight = self._context['winheight']
        self._winwidth = self._context['winwidth']

    def _gather_candidates(self) -> None:
        self._selected_candidates = []
        if self._denite:
            self._denite.gather_candidates(self._context)

    def _init_cursor(self) -> None:
        if self._context['reversed']:
            self._move_to_last_line()
        else:
            self._move_to_first_line()

    def _move_to_pos(self, pos: int) -> None:
        self._vim.call('cursor', pos, 0)
        self._cursor = pos

        if self._context['reversed']:
            self._vim.command('normal! zb')

    def _move_to_next_line(self) -> None:
        if self._cursor < len(self._candidates):
            self._cursor += 1

    def _move_to_prev_line(self) -> None:
        if self._cursor >= 1:
            self._cursor -= 1

    def _move_to_first_line(self) -> None:
        self._cursor = 1

    def _move_to_last_line(self) -> None:
        self._cursor = len(self._candidates)

    def _start_timer(self, key: str) -> None:
        if key in self._timers:
            return

        if key == 'update_candidates':
            self._timers[key] = self._vim.call(
                'denite#helper#_start_update_candidates_timer', self._bufnr)
        elif key == 'update_buffer':
            self._timers[key] = self._vim.call(
                'denite#helper#_start_update_buffer_timer', self._bufnr)

    def _stop_timer(self, key: str) -> None:
        if key not in self._timers:
            return

        self._vim.call('timer_stop', self._timers[key])

        # Note: After timer_stop is called, self._timers may be removed
        if key in self._timers:
            self._timers.pop(key)

    def _split_floating(self, split: str) -> None:
        # Use floating window
        if split == 'floating':
            self._vim.call(
                'nvim_open_win',
                self._vim.call('bufnr', '%'), True, {
                    'relative': 'editor',
                    'row': self._context['winrow'],
                    'col': self._context['wincol'],
                    'width': self._context['winwidth'],
                    'height': self._context['winheight'],
                })
        elif split == 'floating_relative_cursor':
            opened_pos = (self._vim.call('nvim_win_get_position', 0)[0] +
                          self._vim.call('winline') - 1)
            if self._context['auto_resize']:
                height = max(self._winheight, 1)
                width = max(self._winwidth, 1)
            else:
                width = self._context['winwidth']
                height = self._context['winheight']

            if opened_pos + height + 3 > self._vim.options['lines']:
                anchor = 'SW'
                row = 0
                self._context['filter_winrow'] = row + opened_pos
            else:
                anchor = 'NW'
                row = 1
                self._context['filter_winrow'] = row + height + opened_pos
            self._vim.call(
                'nvim_open_win',
                self._vim.call('bufnr', '%'), True, {
                    'relative': 'cursor',
                    'row': row,
                    'col': 0,
                    'width': width,
                    'height': height,
                    'anchor': anchor,
                })
        elif split == 'floating_relative_window':
            self._vim.call(
                'nvim_open_win',
                self._vim.call('bufnr', '%'), True, {
                    'relative': 'win',
                    'row': self._context['winrow'],
                    'col': self._context['wincol'],
                    'width': self._context['winwidth'],
                    'height': self._context['winheight'],
                })
```

avg_line_length: 37.816153 | max_line_length: 79 | alphanum_fraction: 0.548630
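The `matched_char_pattern` expression in `_update_buffer` above escapes the characters that are special inside a Vim `[...]` collection before passing the user's input to `matchadd()`. A minimal, standalone sketch of that escaping step (the function name `matched_char_pattern` is mine, not denite's):

```python
import re

def matched_char_pattern(user_input: str) -> str:
    # Drop spaces, then backslash-escape [, ], \, ^ and - so they are
    # treated literally inside the [...] collection, as _update_buffer does.
    escaped = re.sub(r'([\[\]\\^-])', r'\\\1', user_input.replace(' ', ''))
    return '[{}]'.format(escaped)
```

For example, an input of `a-b` yields the collection `[a\-b]`, so the `-` matches literally instead of denoting a range.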
Sample row (index 0)
hexsha: d99f875863138f11af1d76f0c753c198ad6d96bd | size: 1329 | ext: py | lang: Python
repo_path (max_stars / max_issues / max_forks): PyDSTool/core/context_managers.py
repo_name: yuanz271/PyDSTool | head_hexsha: 886c143cdd192aea204285f3a1cb4968c763c646
repo_licenses (stars / issues / forks): Python-2.0
max_stars_count: None | max_issues_count: None | max_forks_count: None (all event min/max datetimes: None)
content:

```python
# -*- coding: utf-8 -*-

"""Context managers implemented for (mostly) internal use"""

import contextlib
import functools
from io import UnsupportedOperation
import os
import sys


__all__ = ["RedirectStdout", "RedirectStderr"]


@contextlib.contextmanager
def _stdchannel_redirected(stdchannel, dest_filename, mode="w"):
    """
    A context manager to temporarily redirect stdout or stderr

    Originally by Marc Abramowitz, 2013
    (http://marc-abramowitz.com/archives/2013/07/19/python-context-manager-for-redirected-stdout-and-stderr/)
    """

    oldstdchannel = None
    dest_file = None
    try:
        if stdchannel is None:
            yield iter([None])
        else:
            oldstdchannel = os.dup(stdchannel.fileno())
            dest_file = open(dest_filename, mode)
            os.dup2(dest_file.fileno(), stdchannel.fileno())
            yield
    except (UnsupportedOperation, AttributeError):
        yield iter([None])
    finally:
        if oldstdchannel is not None:
            os.dup2(oldstdchannel, stdchannel.fileno())
        if dest_file is not None:
            dest_file.close()


RedirectStdout = functools.partial(_stdchannel_redirected, sys.stdout)
RedirectStderr = functools.partial(_stdchannel_redirected, sys.stderr)
RedirectNoOp = functools.partial(_stdchannel_redirected, None, "")
```

avg_line_length: 28.891304 | max_line_length: 109 | alphanum_fraction: 0.689240
Sample row (index 1)
hexsha: d99f875863138f11af1d76f0c753c198ad6d96bd | size: 1329 | ext: py | lang: Python
repo_path (max_stars / max_issues / max_forks): PyDSTool/core/context_managers.py
repo_name: yuanz271/PyDSTool | head_hexsha: 886c143cdd192aea204285f3a1cb4968c763c646
repo_licenses (stars / issues / forks): OLDAP-2.7
max_stars_count: None | max_issues_count: None | max_forks_count: None (all event min/max datetimes: None)
content:

```python
# -*- coding: utf-8 -*-

"""Context managers implemented for (mostly) internal use"""

import contextlib
import functools
from io import UnsupportedOperation
import os
import sys


__all__ = ["RedirectStdout", "RedirectStderr"]


@contextlib.contextmanager
def _stdchannel_redirected(stdchannel, dest_filename, mode="w"):
    """
    A context manager to temporarily redirect stdout or stderr

    Originally by Marc Abramowitz, 2013
    (http://marc-abramowitz.com/archives/2013/07/19/python-context-manager-for-redirected-stdout-and-stderr/)
    """

    oldstdchannel = None
    dest_file = None
    try:
        if stdchannel is None:
            yield iter([None])
        else:
            oldstdchannel = os.dup(stdchannel.fileno())
            dest_file = open(dest_filename, mode)
            os.dup2(dest_file.fileno(), stdchannel.fileno())
            yield
    except (UnsupportedOperation, AttributeError):
        yield iter([None])
    finally:
        if oldstdchannel is not None:
            os.dup2(oldstdchannel, stdchannel.fileno())
        if dest_file is not None:
            dest_file.close()


RedirectStdout = functools.partial(_stdchannel_redirected, sys.stdout)
RedirectStderr = functools.partial(_stdchannel_redirected, sys.stderr)
RedirectNoOp = functools.partial(_stdchannel_redirected, None, "")
```

avg_line_length: 28.891304 | max_line_length: 109 | alphanum_fraction: 0.689240
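The `os.dup()`/`os.dup2()` redirection pattern in the sample above can be exercised without touching the real stdout by pointing it at a plain file. This is a self-contained re-implementation of the same mechanism (names `redirected`, `path_a`, `path_b` are mine, for illustration only):

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def redirected(stdchannel, dest_filename, mode="w"):
    """Temporarily point stdchannel's file descriptor at dest_filename,
    using the same dup/dup2 dance as _stdchannel_redirected."""
    oldfd = os.dup(stdchannel.fileno())       # remember the original target
    dest_file = open(dest_filename, mode)
    try:
        os.dup2(dest_file.fileno(), stdchannel.fileno())
        yield
    finally:
        stdchannel.flush()                    # flush while still redirected
        os.dup2(oldfd, stdchannel.fileno())   # restore the original target
        os.close(oldfd)
        dest_file.close()

# A plain file stands in for sys.stdout:
workdir = tempfile.mkdtemp()
path_a = os.path.join(workdir, "a.txt")
path_b = os.path.join(workdir, "b.txt")

channel = open(path_a, "w")
with redirected(channel, path_b):
    channel.write("inside")   # lands in b.txt while redirected
channel.write("outside")      # goes back to a.txt after restore
channel.close()
```

Because the swap happens at the file-descriptor level, it also captures output written by C extensions or subprocesses inheriting the descriptor, which a pure `sys.stdout = ...` assignment would miss.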
Sample row (index 0)
hexsha: d99ff34b5f61cee604590c456f40398d7da18182 | size: 3215 | ext: py | lang: Python
repo_path (max_stars / max_issues / max_forks): pos_kiosk/hooks.py
repo_name: Muzzy73/pos_kiosk | head_hexsha: 1ed42cfaeb15f009293b76d05dd85bd322b42f03
repo_licenses (stars / issues / forks): MIT
max_stars_count: 1 (event min/max: 2022-03-05T11:42:36.000Z / 2022-03-05T11:42:36.000Z)
max_issues_count: None (event min/max: None / None)
max_forks_count: 1 (event min/max: 2022-03-05T11:42:37.000Z / 2022-03-05T11:42:37.000Z)
content:

```python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from . import __version__ as app_version

app_name = "pos_kiosk"
app_title = "Pos Kiosk"
app_publisher = "9t9it"
app_description = "Kiosk App"
app_icon = "octicon octicon-file-directory"
app_color = "grey"
app_email = "info@9t9it.com"
app_license = "MIT"

# Includes in <head>
# ------------------

# include js, css files in header of desk.html
# app_include_css = "/assets/pos_kiosk/css/pos_kiosk.css"
# app_include_js = "/assets/pos_kiosk/js/pos_kiosk.js"

# include js, css files in header of web template
# web_include_css = "/assets/pos_kiosk/css/pos_kiosk.css"
# web_include_js = "/assets/pos_kiosk/js/pos_kiosk.js"

# include js in page
# page_js = {"page" : "public/js/file.js"}
# page_js = {
#     "kiosk": ["public/js/pos_page_js.js", "public/js/includes/number_to_words.js"]
# }

# include js in doctype views
# doctype_js = {"doctype" : "public/js/doctype.js"}
# doctype_list_js = {"doctype" : "public/js/doctype_list.js"}
# doctype_tree_js = {"doctype" : "public/js/doctype_tree.js"}
# doctype_calendar_js = {"doctype" : "public/js/doctype_calendar.js"}
fixtures = [
    {
        "doctype": "Custom Field",
        "filters": [
            [
                "name",
                "in",
                [
                    "Sales Invoice Item-pos_kiosk",
                    "Mode of Payment-logo"
                ]
            ]
        ]
    }
]
# Home Pages
# ----------

# application home page (will override Website Settings)
# home_page = "login"

# website user home page (by Role)
# role_home_page = {
#	"Role": "home_page"
# }

# Website user home page (by function)
# get_website_user_home_page = "pos_kiosk.utils.get_home_page"

# Generators
# ----------

# automatically create page for each record of this doctype
# website_generators = ["Web Page"]

# Installation
# ------------

# before_install = "pos_kiosk.install.before_install"
# after_install = "pos_kiosk.install.after_install"

# Desk Notifications
# ------------------
# See frappe.core.notifications.get_notification_config

# notification_config = "pos_kiosk.notifications.get_notification_config"

# Permissions
# -----------
# Permissions evaluated in scripted ways

# permission_query_conditions = {
# 	"Event": "frappe.desk.doctype.event.event.get_permission_query_conditions",
# }
#
# has_permission = {
# 	"Event": "frappe.desk.doctype.event.event.has_permission",
# }

# Document Events
# ---------------
# Hook on document methods and events

# doc_events = {
# 	"*": {
# 		"on_update": "method",
# 		"on_cancel": "method",
# 		"on_trash": "method"
#	}
# }

# Scheduled Tasks
# ---------------

# scheduler_events = {
# 	"all": [
# 		"pos_kiosk.tasks.all"
# 	],
# 	"daily": [
# 		"pos_kiosk.tasks.daily"
# 	],
# 	"hourly": [
# 		"pos_kiosk.tasks.hourly"
# 	],
# 	"weekly": [
# 		"pos_kiosk.tasks.weekly"
# 	]
# 	"monthly": [
# 		"pos_kiosk.tasks.monthly"
# 	]
# }

# Testing
# -------

# before_tests = "pos_kiosk.install.before_tests"

# Overriding Whitelisted Methods
# ------------------------------
#
# override_whitelisted_methods = {
# 	"pos_bahrain.api.get_item_details.get_item_details": "pos_kiosk.api.item.get_item_details" # noqa
# }
```

avg_line_length: 22.964286 | max_line_length: 101 | alphanum_fraction: 0.631415
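The `fixtures` entry in the hooks file above uses a Frappe-style `[field, "in", values]` filter to narrow which `Custom Field` records get exported. A rough sketch of how such a triple selects records (the `records` data and the inline filter evaluation are illustrative assumptions, not Frappe's actual implementation):

```python
# Hypothetical records standing in for Custom Field documents.
records = [
    {"name": "Sales Invoice Item-pos_kiosk"},
    {"name": "Mode of Payment-logo"},
    {"name": "Some Unrelated Field"},
]

# The filter triple from the fixtures entry: [field, operator, values].
field, op, values = ["name", "in",
                     ["Sales Invoice Item-pos_kiosk", "Mode of Payment-logo"]]

# An "in" filter keeps only records whose field value is in the list.
assert op == "in"
selected = [r for r in records if r[field] in values]
```

Here `selected` contains exactly the two named custom fields, mirroring how only those records would be dumped as fixtures.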
Sample row (index 0)
hexsha: d9a00b2c6f1a0e88ad5b4a7def2a45bd074f417f | size: 3880 | ext: py | lang: Python
repo_path (max_stars / max_issues / max_forks): pypagai/models/model_lstm.py
repo_name: gcouti/pypagAI | head_hexsha: d08fac95361dcc036d890a88cb86ce090322a612
repo_licenses (stars / issues / forks): Apache-2.0
max_stars_count: 1 (event min/max: 2018-07-24T18:53:26.000Z / 2018-07-24T18:53:26.000Z)
max_issues_count: 7 (event min/max: 2020-01-28T21:45:14.000Z / 2022-03-11T23:20:53.000Z)
max_forks_count: None (event min/max: None / None)
content:

```python
from keras import Model, Input
from keras.layers import Dense, concatenate, LSTM, Reshape, Permute, Embedding, Dropout, Convolution1D, Flatten
from keras.optimizers import Adam

from pypagai.models.base import KerasModel


class SimpleLSTM(KerasModel):
    """
    Use a simple lstm neural network
    """
    @staticmethod
    def default_config():
        config = KerasModel.default_config()
        config['hidden'] = 32

        return config

    def __init__(self, cfg):
        super().__init__(cfg)
        self._cfg_ = cfg

    def _create_network_(self):
        hidden = self._cfg_['hidden']
        story = Input((self._story_maxlen, ), name='story')
        question = Input((self._query_maxlen, ), name='question')

        conc = concatenate([story, question],)
        conc = Reshape((1, int(conc.shape[1])))(conc)
        conc = Permute((2, 1))(conc)

        response = LSTM(hidden, dropout=0.2, recurrent_dropout=0.2)(conc)
        response = Dense(self._vocab_size, activation='softmax')(response)

        self._model = Model(inputs=[story, question], outputs=response)
        self._model.compile(optimizer=Adam(lr=2e-4), loss='sparse_categorical_crossentropy', metrics=['accuracy'])


class EmbedLSTM(KerasModel):

    """
    Use a simple lstm neural network
    """
    @staticmethod
    def default_config():
        config = KerasModel.default_config()
        config['hidden'] = 32

        return config

    def __init__(self, cfg):
        super().__init__(cfg)
        self._cfg_ = cfg

    def _create_network_(self):
        hidden = self._cfg_['hidden']

        story = Input((self._story_maxlen, ), name='story')
        question = Input((self._query_maxlen, ), name='question')

        eb_story = Embedding(self._vocab_size, 64)(story)
        eb_story = Dropout(0.3)(eb_story)

        eb_question = Embedding(self._vocab_size, 64)(question)
        eb_question = Dropout(0.3)(eb_question)

        conc = concatenate([eb_story, eb_question], axis=1)

        response = LSTM(hidden, dropout=0.2, recurrent_dropout=0.2)(conc)
        response = Dense(self._vocab_size, activation='softmax')(response)

        self._model = Model(inputs=[story, question], outputs=response)
        self._model.compile(optimizer=Adam(lr=2e-4), loss='sparse_categorical_crossentropy', metrics=['accuracy'])


class ConvLSTM(KerasModel):

    """
    Use a simple lstm neural network
    """
    @staticmethod
    def default_config():
        config = KerasModel.default_config()
        config['hidden'] = 32

        return config

    def __init__(self, model_cfg):
        super().__init__(model_cfg)
        self._cfg = model_cfg

    def _create_network_(self):
        hidden = self._cfg['hidden']

        story = Input((self._story_maxlen, ), name='story')
        question = Input((self._query_maxlen, ), name='question')

        eb_story = Embedding(self._vocab_size, 64)(story)
        eb_story = Convolution1D(64, 3, padding='same')(eb_story)
        eb_story = Convolution1D(32, 3, padding='same')(eb_story)
        eb_story = Convolution1D(16, 3, padding='same')(eb_story)
        # eb_story = Flatten()(eb_story)

        eb_question = Embedding(self._vocab_size, 64)(question)
        eb_question = Convolution1D(64, 3, padding='same')(eb_question)
        eb_question = Convolution1D(32, 3, padding='same')(eb_question)
        eb_question = Convolution1D(16, 3, padding='same')(eb_question)
        # eb_question = Flatten()(eb_question)

        conc = concatenate([eb_story, eb_question], axis=1)

        response = LSTM(hidden, dropout=0.2, recurrent_dropout=0.2)(conc)
        response = Dense(self._vocab_size, activation='softmax')(response)

        self._model = Model(inputs=[story, question], outputs=response)
        self._model.compile(optimizer=Adam(lr=2e-4), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
```

avg_line_length: 33.162393 | max_line_length: 114 | alphanum_fraction: 0.650773
0d9a09cb6f497e8ccdf9de40f4b8ebd6b96a1c43a113pyPythonlib/variables/latent_variables/__init__.pyjoelouismarino/variational_rl11dc14bfb56f3ebbfccd5de206b78712a8039a9aMIT152020-10-20T22:09:36.000Z2021-12-24T13:40:36.000Zlib/variables/latent_variables/__init__.pyjoelouismarino/variational_rl11dc14bfb56f3ebbfccd5de206b78712a8039a9aMITNoneNoneNonelib/variables/latent_variables/__init__.pyjoelouismarino/variational_rl11dc14bfb56f3ebbfccd5de206b78712a8039a9aMIT12020-10-23T19:48:06.000Z2020-10-23T19:48:06.000Zfrom .fully_connected import FullyConnectedLatentVariable\nfrom .convolutional import ConvolutionalLatentVariable\n37.666667570.911504
hexsha | size | ext | lang | max_stars_repo_path | max_stars_repo_name | max_stars_repo_head_hexsha | max_stars_repo_licenses | max_stars_count | max_stars_repo_stars_event_min_datetime | max_stars_repo_stars_event_max_datetime | max_issues_repo_path | max_issues_repo_name | max_issues_repo_head_hexsha | max_issues_repo_licenses | max_issues_count | max_issues_repo_issues_event_min_datetime | max_issues_repo_issues_event_max_datetime | max_forks_repo_path | max_forks_repo_name | max_forks_repo_head_hexsha | max_forks_repo_licenses | max_forks_count | max_forks_repo_forks_event_min_datetime | max_forks_repo_forks_event_max_datetime | content | avg_line_length | max_line_length | alphanum_fraction
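The header row above lists the table's 29 columns. The three trailing numeric columns (avg_line_length, max_line_length, alphanum_fraction) appear to be derived from the `content` column. A minimal sketch of how such per-file metrics could be computed — the exact formulas used by this dataset's pipeline are an assumption here, not confirmed by the report:

```python
def line_metrics(content: str):
    """Per-file metrics analogous to the dataset's avg_line_length,
    max_line_length and alphanum_fraction columns (assumed formulas)."""
    # Split on newlines; a trailing newline yields a final empty line,
    # which is included in the average, as the raw text suggests.
    lengths = [len(line) for line in content.split("\n")]
    avg_line_length = sum(lengths) / len(lengths)
    max_line_length = max(lengths)
    # Fraction of characters that are alphanumeric, over the whole file.
    alnum = sum(c.isalnum() for c in content)
    alphanum_fraction = alnum / len(content) if content else 0.0
    return avg_line_length, max_line_length, alphanum_fraction

avg, mx, frac = line_metrics("import os\nprint(os.getcwd())\n")
# → (9.0, 18, ~0.724)
```

Computing these from `content` and comparing against the stored columns is one way to sanity-check rows of this sample.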
06ae97714d8f4b22a1d08d058e87732477cbb19c09424pyPythonclif/pybind11/generator.pysnu5mumr1k/clif3a907dd7b0986f2b3306c88503d414f4d4f963aeApache-2.0NoneNoneNoneclif/pybind11/generator.pysnu5mumr1k/clif3a907dd7b0986f2b3306c88503d414f4d4f963aeApache-2.0NoneNoneNoneclif/pybind11/generator.pysnu5mumr1k/clif3a907dd7b0986f2b3306c88503d414f4d4f963aeApache-2.0NoneNoneNone# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n"""Generates pybind11 bindings code."""\n\nfrom typing import Dict, Generator, List, Text, Set\n\nfrom clif.protos import ast_pb2\nfrom clif.pybind11 import classes\nfrom clif.pybind11 import enums\nfrom clif.pybind11 import function\nfrom clif.pybind11 import function_lib\nfrom clif.pybind11 import type_casters\nfrom clif.pybind11 import utils\n\nI = utils.I\n\n\nclass ModuleGenerator(object):\n """A class that generates pybind11 bindings code from CLIF ast."""\n\n def __init__(self, ast: ast_pb2.AST, module_name: str, header_path: str,\n include_paths: List[str]):\n self._ast = ast\n self._module_name = module_name\n self._header_path = header_path\n self._include_paths = include_paths\n self._unique_classes = {}\n\n def generate_header(self,\n ast: ast_pb2.AST) -> Generator[str, None, None]:\n """Generates pybind11 bindings code from CLIF ast."""\n includes = set()\n for decl in ast.decls:\n includes.add(decl.cpp_file)\n self._collect_class_cpp_names(decl)\n yield '#include "third_party/pybind11/include/pybind11/smart_holder.h"'\n 
for include in includes:\n yield f'#include "{include}"'\n yield '\n'\n for cpp_name in self._unique_classes:\n yield f'PYBIND11_SMART_HOLDER_TYPE_CASTERS({cpp_name})'\n yield '\n'\n for cpp_name, py_name in self._unique_classes.items():\n yield f'// CLIF use `{cpp_name}` as {py_name}'\n\n def generate_from(self, ast: ast_pb2.AST):\n """Generates pybind11 bindings code from CLIF ast.\n\n Args:\n ast: CLIF ast protobuf.\n\n Yields:\n Generated pybind11 bindings code.\n """\n yield from self._generate_headlines()\n\n # Find and keep track of virtual functions.\n python_override_class_names = {}\n\n for decl in ast.decls:\n yield from self._generate_python_override_class_names(\n python_override_class_names, decl)\n self._collect_class_cpp_names(decl)\n\n yield from type_casters.generate_from(ast, self._include_paths)\n yield f'PYBIND11_MODULE({self._module_name}, m) {{'\n yield from self._generate_import_modules(ast)\n yield I+('m.doc() = "CLIF-generated pybind11-based module for '\n f'{ast.source}";')\n yield I + 'py::google::ImportStatusModule();'\n\n for decl in ast.decls:\n if decl.decltype == ast_pb2.Decl.Type.FUNC:\n for s in function.generate_from('m', decl.func, None):\n yield I + s\n elif decl.decltype == ast_pb2.Decl.Type.CONST:\n yield from self._generate_const_variables(decl.const)\n elif decl.decltype == ast_pb2.Decl.Type.CLASS:\n yield from classes.generate_from(\n decl.class_, 'm',\n python_override_class_names.get(decl.class_.name.cpp_name, ''))\n elif decl.decltype == ast_pb2.Decl.Type.ENUM:\n yield from enums.generate_from('m', decl.enum)\n yield ''\n yield '}'\n\n def _generate_import_modules(self,\n ast: ast_pb2.AST) -> Generator[str, None, None]:\n for include in ast.pybind11_includes:\n # Converts `full/project/path/cheader_pybind11_clif.h` to\n # `full.project.path.cheader_pybind11`\n names = include.split('/')\n names.insert(0, 'google3')\n names[-1] = names[-1][:-len('_clif.h')]\n module = '.'.join(names)\n yield 
f'py::module_::import("{module}");'\n\n def _generate_headlines(self):\n """Generates #includes and headers."""\n includes = set()\n for decl in self._ast.decls:\n includes.add(decl.cpp_file)\n if decl.decltype == ast_pb2.Decl.Type.CONST:\n self._generate_const_variables_headers(decl.const, includes)\n for include in self._ast.pybind11_includes:\n includes.add(include)\n for include in self._ast.usertype_includes:\n includes.add(include)\n yield '#include "third_party/pybind11/include/pybind11/complex.h"'\n yield '#include "third_party/pybind11/include/pybind11/functional.h"'\n yield '#include "third_party/pybind11/include/pybind11/operators.h"'\n yield '#include "third_party/pybind11/include/pybind11/smart_holder.h"'\n yield '// potential future optimization: generate this line only as needed.'\n yield '#include "third_party/pybind11/include/pybind11/stl.h"'\n yield ''\n yield '#include "clif/pybind11/runtime.h"'\n yield '#include "clif/pybind11/type_casters.h"'\n yield ''\n for include in includes:\n yield f'#include "{include}"'\n yield f'#include "{self._header_path}"'\n yield ''\n yield 'namespace py = pybind11;'\n yield ''\n\n def _generate_const_variables_headers(self, const_decl: ast_pb2.ConstDecl,\n includes: Set[str]):\n if const_decl.type.lang_type == 'complex':\n includes.add('third_party/pybind11/include/pybind11/complex.h')\n if (const_decl.type.lang_type.startswith('list<') or\n const_decl.type.lang_type.startswith('dict<') or\n const_decl.type.lang_type.startswith('set<')):\n includes.add('third_party/pybind11/include/pybind11/stl.h')\n\n def _generate_const_variables(self, const_decl: ast_pb2.ConstDecl):\n """Generates variables."""\n lang_type = const_decl.type.lang_type\n\n if (lang_type in {'int', 'float', 'double', 'bool', 'str'} or\n lang_type.startswith('tuple<')):\n const_def = I + (f'm.attr("{const_decl.name.native}") = '\n f'{const_decl.name.cpp_name};')\n else:\n const_def = I + (f'm.attr("{const_decl.name.native}") = '\n 
f'py::cast({const_decl.name.cpp_name});')\n\n yield const_def\n\n def _generate_python_override_class_names(\n self, python_override_class_names: Dict[Text, Text], decl: ast_pb2.Decl,\n trampoline_name_suffix: str = '_trampoline',\n self_life_support: str = 'py::trampoline_self_life_support'):\n """Generates Python overrides classes dictionary for virtual functions."""\n if decl.decltype == ast_pb2.Decl.Type.CLASS:\n virtual_members = []\n for member in decl.class_.members:\n if member.decltype == ast_pb2.Decl.Type.FUNC and member.func.virtual:\n virtual_members.append(member)\n if not virtual_members:\n return\n python_override_class_name = (\n f'{decl.class_.name.native}_{trampoline_name_suffix}')\n assert decl.class_.name.cpp_name not in python_override_class_names\n python_override_class_names[\n decl.class_.name.cpp_name] = python_override_class_name\n yield (f'struct {python_override_class_name} : '\n f'{decl.class_.name.cpp_name}, {self_life_support} {{')\n yield I + (\n f'using {decl.class_.name.cpp_name}::{decl.class_.name.native};')\n for member in virtual_members:\n yield from self._generate_virtual_function(\n decl.class_.name.native, member.func)\n if python_override_class_name:\n yield '};'\n\n def _generate_virtual_function(self,\n class_name: str, func_decl: ast_pb2.FuncDecl):\n """Generates virtual function overrides calling Python methods."""\n return_type = ''\n if func_decl.cpp_void_return:\n return_type = 'void'\n elif func_decl.returns:\n for v in func_decl.returns:\n if v.HasField('cpp_exact_type'):\n return_type = v.cpp_exact_type\n\n params = ', '.join([f'{p.name.cpp_name}' for p in func_decl.params])\n params_list_with_types = []\n for p in func_decl.params:\n params_list_with_types.append(\n f'{function_lib.generate_param_type(p)} {p.name.cpp_name}')\n params_str_with_types = ', '.join(params_list_with_types)\n\n cpp_const = ''\n if func_decl.cpp_const_method:\n cpp_const = ' const'\n\n yield I + (f'{return_type} '\n 
f'{func_decl.name.native}({params_str_with_types}) '\n f'{cpp_const} override {{')\n\n if func_decl.is_pure_virtual:\n pybind11_override = 'PYBIND11_OVERRIDE_PURE'\n else:\n pybind11_override = 'PYBIND11_OVERRIDE'\n\n yield I + I + f'{pybind11_override}('\n yield I + I + I + f'{return_type},'\n yield I + I + I + f'{class_name},'\n yield I + I + I + f'{func_decl.name.native},'\n yield I + I + I + f'{params}'\n yield I + I + ');'\n yield I + '}'\n\n def _collect_class_cpp_names(self, decl: ast_pb2.Decl,\n parent_name: str = '') -> None:\n """Adds every class name to a set. Only to be used in this context."""\n if decl.decltype == ast_pb2.Decl.Type.CLASS:\n full_native_name = decl.class_.name.native\n if parent_name:\n full_native_name = '.'.join([parent_name, decl.class_.name.native])\n self._unique_classes[decl.class_.name.cpp_name] = full_native_name\n for member in decl.class_.members:\n self._collect_class_cpp_names(member, full_native_name)\n\n\ndef write_to(channel, lines):\n """Writes the generated code to files."""\n for s in lines:\n channel.write(s)\n channel.write('\n')\n38.622951800.671053
06aea2be020c7e8aa245e0f3059dcd2d6daefd1b72865pyPythonadvent/model/discriminator.pyChristopheGraveline064/ADVENTfc0ecd099862ed68979b2197423f1bb34df09c74Apache-2.012021-01-17T06:02:10.000Z2021-01-17T06:02:10.000Zadvent/model/discriminator.pyChristopheGraveline064/ADVENTfc0ecd099862ed68979b2197423f1bb34df09c74Apache-2.022021-01-17T06:21:29.000Z2021-01-17T20:19:50.000Zadvent/model/discriminator.pyChristopheGraveline064/ADVENTfc0ecd099862ed68979b2197423f1bb34df09c74Apache-2.0NoneNoneNonefrom torch import nn\n\n\ndef get_fc_discriminator(num_classes, ndf=64):\n return nn.Sequential(\n nn.Conv2d(num_classes, ndf, kernel_size=4, stride=2, padding=1),\n nn.LeakyReLU(negative_slope=0.2, inplace=True),\n nn.Conv2d(ndf, ndf * 2, kernel_size=4, stride=2, padding=1),\n nn.LeakyReLU(negative_slope=0.2, inplace=True),\n nn.Conv2d(ndf * 2, ndf * 4, kernel_size=4, stride=2, padding=1),\n nn.LeakyReLU(negative_slope=0.2, inplace=True),\n nn.Conv2d(ndf * 4, ndf * 8, kernel_size=4, stride=2, padding=1),\n nn.LeakyReLU(negative_slope=0.2, inplace=True),\n nn.Conv2d(ndf * 8, 1, kernel_size=4, stride=2, padding=1),\n )\n\n\n# def get_fe_discriminator(num_classes, ndf=64): # 256-128-64-32-16\n# return nn.Sequential(\n# nn.Conv2d(num_classes, ndf * 4, kernel_size=4, stride=2, padding=1),\n# nn.LeakyReLU(negative_slope=0.2, inplace=True),\n# nn.Conv2d(ndf * 4, ndf * 2, kernel_size=4, stride=2, padding=1),\n# nn.LeakyReLU(negative_slope=0.2, inplace=True),\n# nn.Conv2d(ndf * 2, ndf, kernel_size=2, stride=2, padding=0),\n# nn.LeakyReLU(negative_slope=0.2, inplace=True),\n# # nn.Conv2d(ndf * 4, ndf * 8, kernel_size=4, stride=2, padding=1),\n# # nn.LeakyReLU(negative_slope=0.2, inplace=True),\n# nn.Conv2d(ndf, 1, kernel_size=2, stride=2, padding=0),\n# )\n\n# def get_fe_discriminator(num_classes, ndf=64):\n# return nn.Sequential(\n# nn.Conv2d(num_classes, ndf, kernel_size=4, stride=2, padding=1),\n# nn.LeakyReLU(negative_slope=0.2, inplace=True),\n# nn.Conv2d(ndf, ndf * 2, kernel_size=4, 
stride=2, padding=1),\n# nn.LeakyReLU(negative_slope=0.2, inplace=True),\n# nn.Conv2d(ndf * 2, ndf * 4, kernel_size=4, stride=2, padding=1),\n# nn.LeakyReLU(negative_slope=0.2, inplace=True),\n# # nn.Conv2d(ndf * 4, ndf * 8, kernel_size=4, stride=2, padding=1),\n# # nn.LeakyReLU(negative_slope=0.2, inplace=True),\n# nn.Conv2d(ndf * 4, 1, kernel_size=1, stride=1, padding=0),\n# )\n\ndef get_fe_discriminator(num_classes, ndf=64): # H/8,H/8,(1024 -> 256 -> 128 -> 64 -> 1)\n return nn.Sequential(\n nn.Conv2d(num_classes, ndf * 4, kernel_size=1, stride=1, padding=0),\n # x=self.dropout(x)\n nn.LeakyReLU(negative_slope=0.2, inplace=True),\n nn.Conv2d(ndf * 4, ndf * 2, kernel_size=1, stride=1, padding=0),\n # x=self.dropout(x)\n nn.LeakyReLU(negative_slope=0.2, inplace=True),\n nn.Conv2d(ndf * 2, ndf, kernel_size=1, stride=1, padding=0),\n # x=self.dropout(x)\n nn.LeakyReLU(negative_slope=0.2, inplace=True),\n # nn.Conv2d(ndf * 4, ndf * 8, kernel_size=4, stride=2, padding=1),\n # nn.LeakyReLU(negative_slope=0.2, inplace=True),\n nn.Conv2d(ndf, 1, kernel_size=1, stride=1, padding=0),\n )49.396552900.624433
06aea61815f42420b447d1ce164aa7c65f5c5bc943652pyPythonspyder/dependencies.pyaglotero/spyder075d32fa359b728416de36cb0e744715fa5e3943MIT22019-04-25T08:25:37.000Z2019-04-25T08:25:43.000Zspyder/dependencies.pyaglotero/spyder075d32fa359b728416de36cb0e744715fa5e3943MIT12020-10-29T19:53:11.000Z2020-10-29T19:53:11.000Zspyder/dependencies.pyaglotero/spyder075d32fa359b728416de36cb0e744715fa5e3943MIT12019-02-18T01:28:51.000Z2019-02-18T01:28:51.000Z# -*- coding: utf-8 -*-\r\n#\r\n# Copyright © Spyder Project Contributors\r\n# Licensed under the terms of the MIT License\r\n# (see spyder/__init__.py for details)\r\n\r\n"""Module checking Spyder runtime dependencies"""\r\n\r\n\r\nimport os\r\n\r\n# Local imports\r\nfrom spyder.utils import programs\r\n\r\n\r\nclass Dependency(object):\r\n """Spyder's dependency\r\n\r\n version may starts with =, >=, > or < to specify the exact requirement ;\r\n multiple conditions may be separated by ';' (e.g. '>=0.13;<1.0')"""\r\n\r\n OK = 'OK'\r\n NOK = 'NOK'\r\n\r\n def __init__(self, modname, features, required_version,\r\n installed_version=None, optional=False):\r\n self.modname = modname\r\n self.features = features\r\n self.required_version = required_version\r\n self.optional = optional\r\n if installed_version is None:\r\n try:\r\n self.installed_version = programs.get_module_version(modname)\r\n except:\r\n # NOTE: Don't add any exception type here!\r\n # Modules can fail to import in several ways besides\r\n # ImportError\r\n self.installed_version = None\r\n else:\r\n self.installed_version = installed_version\r\n\r\n def check(self):\r\n """Check if dependency is installed"""\r\n return programs.is_module_installed(self.modname,\r\n self.required_version,\r\n self.installed_version)\r\n\r\n def get_installed_version(self):\r\n """Return dependency status (string)"""\r\n if self.check():\r\n return '%s (%s)' % (self.installed_version, self.OK)\r\n else:\r\n return '%s (%s)' % (self.installed_version, self.NOK)\r\n \r\n def 
get_status(self):\r\n """Return dependency status (string)"""\r\n if self.check():\r\n return self.OK\r\n else:\r\n return self.NOK\r\n\r\n\r\nDEPENDENCIES = []\r\n\r\n\r\ndef add(modname, features, required_version, installed_version=None,\r\n optional=False):\r\n """Add Spyder dependency"""\r\n global DEPENDENCIES\r\n for dependency in DEPENDENCIES:\r\n if dependency.modname == modname:\r\n raise ValueError("Dependency has already been registered: %s"\\r\n % modname)\r\n DEPENDENCIES += [Dependency(modname, features, required_version,\r\n installed_version, optional)]\r\n\r\n\r\ndef check(modname):\r\n """Check if required dependency is installed"""\r\n for dependency in DEPENDENCIES:\r\n if dependency.modname == modname:\r\n return dependency.check()\r\n else:\r\n raise RuntimeError("Unkwown dependency %s" % modname)\r\n\r\n\r\ndef status(deps=DEPENDENCIES, linesep=os.linesep):\r\n """Return a status of dependencies"""\r\n maxwidth = 0\r\n col1 = []\r\n col2 = []\r\n for dependency in deps:\r\n title1 = dependency.modname\r\n title1 += ' ' + dependency.required_version\r\n col1.append(title1)\r\n maxwidth = max([maxwidth, len(title1)])\r\n col2.append(dependency.get_installed_version())\r\n text = ""\r\n for index in range(len(deps)):\r\n text += col1[index].ljust(maxwidth) + ': ' + col2[index] + linesep\r\n return text[:-1]\r\n\r\n\r\ndef missing_dependencies():\r\n """Return the status of missing dependencies (if any)"""\r\n missing_deps = []\r\n for dependency in DEPENDENCIES:\r\n if not dependency.check() and not dependency.optional:\r\n missing_deps.append(dependency)\r\n if missing_deps:\r\n return status(deps=missing_deps, linesep='<br>')\r\n else:\r\n return ""\r\n32.035088780.585706
06aea82e968ce364fdac8932cf3b83554a12ac7972947pyPythonsetup.pyjasperhyp/Chemprop4SEc02b604b63b6766464db829fea0b306c67302e82MIT12021-12-15T05:18:07.000Z2021-12-15T05:18:07.000Zsetup.pyjasperhyp/chemprop4SEc02b604b63b6766464db829fea0b306c67302e82MITNoneNoneNonesetup.pyjasperhyp/chemprop4SEc02b604b63b6766464db829fea0b306c67302e82MITNoneNoneNoneimport os\r\nfrom setuptools import find_packages, setup\r\n\r\n# Load version number\r\n__version__ = None\r\n\r\nsrc_dir = os.path.abspath(os.path.dirname(__file__))\r\nversion_file = os.path.join(src_dir, 'chemprop', '_version.py')\r\n\r\nwith open(version_file, encoding='utf-8') as fd:\r\n exec(fd.read())\r\n\r\n# Load README\r\nwith open('README.md', encoding='utf-8') as f:\r\n long_description = f.read()\r\n\r\n\r\nsetup(\r\n name='chemprop',\r\n version=__version__,\r\n author='Kyle Swanson, Kevin Yang, Wengong Jin, Lior Hirschfeld, Allison Tam',\r\n author_email='chemprop@mit.edu',\r\n description='Molecular Property Prediction with Message Passing Neural Networks',\r\n long_description=long_description,\r\n long_description_content_type='text/markdown',\r\n url='https://github.com/chemprop/chemprop',\r\n download_url=f'https://github.com/chemprop/chemprop/v_{__version__}.tar.gz',\r\n project_urls={\r\n 'Documentation': 'https://chemprop.readthedocs.io/en/latest/',\r\n 'Source': 'https://github.com/chemprop/chemprop',\r\n 'PyPi': 'https://pypi.org/project/chemprop/',\r\n 'Demo': 'http://chemprop.csail.mit.edu/',\r\n },\r\n license='MIT',\r\n packages=find_packages(),\r\n package_data={'chemprop': ['py.typed']},\r\n entry_points={\r\n 'console_scripts': [\r\n 'chemprop_train=chemprop.train:chemprop_train',\r\n 'chemprop_predict=chemprop.train:chemprop_predict',\r\n 'chemprop_fingerprint=chemprop.train:chemprop_fingerprint',\r\n 'chemprop_hyperopt=chemprop.hyperparameter_optimization:chemprop_hyperopt',\r\n 'chemprop_interpret=chemprop.interpret:chemprop_interpret',\r\n 'chemprop_web=chemprop.web.run:chemprop_web',\r\n 
'sklearn_train=chemprop.sklearn_train:sklearn_train',\r\n 'sklearn_predict=chemprop.sklearn_predict:sklearn_predict',\r\n ]\r\n },\r\n install_requires=[\r\n 'flask>=1.1.2',\r\n 'hyperopt>=0.2.3',\r\n 'matplotlib>=3.1.3',\r\n 'numpy>=1.18.1',\r\n 'pandas>=1.0.3',\r\n 'pandas-flavor>=0.2.0',\r\n 'scikit-learn>=0.22.2.post1',\r\n 'scipy>=1.4.1',\r\n 'sphinx>=3.1.2',\r\n 'tensorboardX>=2.0',\r\n 'torch>=1.5.1',\r\n 'tqdm>=4.45.0',\r\n 'typed-argument-parser>=1.6.1'\r\n ],\r\n extras_require={\r\n 'test': [\r\n 'pytest>=6.2.2',\r\n 'parameterized>=0.8.1'\r\n ]\r\n },\r\n python_requires='>=3.6',\r\n classifiers=[\r\n 'Programming Language :: Python :: 3',\r\n 'Programming Language :: Python :: 3.6',\r\n 'Programming Language :: Python :: 3.7',\r\n 'Programming Language :: Python :: 3.8',\r\n 'License :: OSI Approved :: MIT License',\r\n 'Operating System :: OS Independent'\r\n ],\r\n keywords=[\r\n 'chemistry',\r\n 'machine learning',\r\n 'property prediction',\r\n 'message passing neural network',\r\n 'graph neural network'\r\n ]\r\n)\r\n33.873563880.599932
06aec0377fc121dfeab883792414df3e21c04a7122335pyPythonmars/tensor/indexing/slice.pyHarshCasper/mars4c12c968414d666c7a10f497bc22de90376b1932Apache-2.022019-03-29T04:11:10.000Z2020-07-08T10:19:54.000Zmars/tensor/indexing/slice.pyHarshCasper/mars4c12c968414d666c7a10f497bc22de90376b1932Apache-2.0NoneNoneNonemars/tensor/indexing/slice.pyHarshCasper/mars4c12c968414d666c7a10f497bc22de90376b1932Apache-2.0NoneNoneNone# Copyright 1999-2020 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the "License");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an "AS IS" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom ... 
import opcodes as OperandDef\nfrom ...serialize import KeyField, ListField\nfrom ..operands import TensorHasInput, TensorOperandMixin\nfrom ..array_utils import get_array_module\nfrom ..core import TensorOrder\n\n\nclass TensorSlice(TensorHasInput, TensorOperandMixin):\n _op_type_ = OperandDef.SLICE\n\n _input = KeyField('input')\n _slices = ListField('slices')\n\n def __init__(self, slices=None, dtype=None, sparse=False, **kw):\n super().__init__(_slices=slices, _dtype=dtype, _sparse=sparse, **kw)\n\n @property\n def slices(self):\n return self._slices\n\n def _set_inputs(self, inputs):\n super()._set_inputs(inputs)\n self._input = self._inputs[0]\n\n def _get_order(self, kw, i):\n order = kw.pop('order', None)\n if order is None:\n inp = self.input\n if inp is None or inp.order == TensorOrder.C_ORDER:\n return TensorOrder.C_ORDER\n\n for shape, slc in zip(inp.shape, self._slices):\n if slc is None:\n continue\n s = slc.indices(shape)\n if s[0] == 0 and s[1] == shape and s[2] == 1:\n continue\n else:\n return TensorOrder.C_ORDER\n\n return inp.order\n\n return order[i] if isinstance(order, (list, tuple)) else order\n\n @classmethod\n def execute(cls, ctx, op):\n inp = ctx[op.inputs[0].key]\n if op.input.ndim == 0 and not hasattr(inp, 'shape'):\n # scalar, but organize it into an array\n inp = get_array_module(inp).array(inp)\n x = inp[tuple(op.slices)]\n out = op.outputs[0]\n ctx[out.key] = x.astype(x.dtype, order=out.order.value, copy=False)\n33.840580760.636403
06aec42c6af54cc3a34d294f61a827b50bebc2cb650221pyPythonftplugin/python/python/pyflakes/pyflakes/checker.pyleewckk/vim.configurationdb3faa4343714dd3eb3b7ab19f8cd0b64a52ee57MITNoneNoneNoneftplugin/python/python/pyflakes/pyflakes/checker.pyleewckk/vim.configurationdb3faa4343714dd3eb3b7ab19f8cd0b64a52ee57MITNoneNoneNoneftplugin/python/python/pyflakes/pyflakes/checker.pyleewckk/vim.configurationdb3faa4343714dd3eb3b7ab19f8cd0b64a52ee57MITNoneNoneNone"""\r\nMain module.\r\n\r\nImplement the central Checker class.\r\nAlso, it models the Bindings and Scopes.\r\n"""\r\nimport __future__\r\nimport doctest\r\nimport os\r\nimport sys\r\n\r\nPY2 = sys.version_info < (3, 0)\r\nPY32 = sys.version_info < (3, 3) # Python 2.5 to 3.2\r\nPY33 = sys.version_info < (3, 4) # Python 2.5 to 3.3\r\nPY34 = sys.version_info < (3, 5) # Python 2.5 to 3.4\r\ntry:\r\n sys.pypy_version_info\r\n PYPY = True\r\nexcept AttributeError:\r\n PYPY = False\r\n\r\nbuiltin_vars = dir(__import__('__builtin__' if PY2 else 'builtins'))\r\n\r\ntry:\r\n import ast\r\nexcept ImportError: # Python 2.5\r\n import _ast as ast\r\n\r\n if 'decorator_list' not in ast.ClassDef._fields:\r\n # Patch the missing attribute 'decorator_list'\r\n ast.ClassDef.decorator_list = ()\r\n ast.FunctionDef.decorator_list = property(lambda s: s.decorators)\r\n\r\nfrom pyflakes import messages\r\n\r\n\r\nif PY2:\r\n def getNodeType(node_class):\r\n # workaround str.upper() which is locale-dependent\r\n return str(unicode(node_class.__name__).upper())\r\nelse:\r\n def getNodeType(node_class):\r\n return node_class.__name__.upper()\r\n\r\n# Python >= 3.3 uses ast.Try instead of (ast.TryExcept + ast.TryFinally)\r\nif PY32:\r\n def getAlternatives(n):\r\n if isinstance(n, (ast.If, ast.TryFinally)):\r\n return [n.body]\r\n if isinstance(n, ast.TryExcept):\r\n return [n.body + n.orelse] + [[hdl] for hdl in n.handlers]\r\nelse:\r\n def getAlternatives(n):\r\n if isinstance(n, ast.If):\r\n return [n.body]\r\n if isinstance(n, ast.Try):\r\n return 
[n.body + n.orelse] + [[hdl] for hdl in n.handlers]\r\n\r\nif PY34:\r\n LOOP_TYPES = (ast.While, ast.For)\r\nelse:\r\n LOOP_TYPES = (ast.While, ast.For, ast.AsyncFor)\r\n\r\n\r\nclass _FieldsOrder(dict):\r\n """Fix order of AST node fields."""\r\n\r\n def _get_fields(self, node_class):\r\n # handle iter before target, and generators before element\r\n fields = node_class._fields\r\n if 'iter' in fields:\r\n key_first = 'iter'.find\r\n elif 'generators' in fields:\r\n key_first = 'generators'.find\r\n else:\r\n key_first = 'value'.find\r\n return tuple(sorted(fields, key=key_first, reverse=True))\r\n\r\n def __missing__(self, node_class):\r\n self[node_class] = fields = self._get_fields(node_class)\r\n return fields\r\n\r\n\r\ndef counter(items):\r\n """\r\n Simplest required implementation of collections.Counter. Required as 2.6\r\n does not have Counter in collections.\r\n """\r\n results = {}\r\n for item in items:\r\n results[item] = results.get(item, 0) + 1\r\n return results\r\n\r\n\r\ndef iter_child_nodes(node, omit=None, _fields_order=_FieldsOrder()):\r\n """\r\n Yield all direct child nodes of *node*, that is, all fields that\r\n are nodes and all items of fields that are lists of nodes.\r\n """\r\n for name in _fields_order[node.__class__]:\r\n if name == omit:\r\n continue\r\n field = getattr(node, name, None)\r\n if isinstance(field, ast.AST):\r\n yield field\r\n elif isinstance(field, list):\r\n for item in field:\r\n yield item\r\n\r\n\r\ndef convert_to_value(item):\r\n if isinstance(item, ast.Str):\r\n return item.s\r\n elif hasattr(ast, 'Bytes') and isinstance(item, ast.Bytes):\r\n return item.s\r\n elif isinstance(item, ast.Tuple):\r\n return tuple(convert_to_value(i) for i in item.elts)\r\n elif isinstance(item, ast.Num):\r\n return item.n\r\n elif isinstance(item, ast.Name):\r\n result = VariableKey(item=item)\r\n constants_lookup = {\r\n 'True': True,\r\n 'False': False,\r\n 'None': None,\r\n }\r\n return constants_lookup.get(\r\n 
result.name,\r\n result,\r\n )\r\n elif (not PY33) and isinstance(item, ast.NameConstant):\r\n # None, True, False are nameconstants in python3, but names in 2\r\n return item.value\r\n else:\r\n return UnhandledKeyType()\r\n\r\n\r\nclass Binding(object):\r\n """\r\n Represents the binding of a value to a name.\r\n\r\n The checker uses this to keep track of which names have been bound and\r\n which names have not. See L{Assignment} for a special type of binding that\r\n is checked with stricter rules.\r\n\r\n @ivar used: pair of (L{Scope}, node) indicating the scope and\r\n the node that this binding was last used.\r\n """\r\n\r\n def __init__(self, name, source):\r\n self.name = name\r\n self.source = source\r\n self.used = False\r\n\r\n def __str__(self):\r\n return self.name\r\n\r\n def __repr__(self):\r\n return '<%s object %r from line %r at 0x%x>' % (self.__class__.__name__,\r\n self.name,\r\n self.source.lineno,\r\n id(self))\r\n\r\n def redefines(self, other):\r\n return isinstance(other, Definition) and self.name == other.name\r\n\r\n\r\nclass Definition(Binding):\r\n """\r\n A binding that defines a function or a class.\r\n """\r\n\r\n\r\nclass UnhandledKeyType(object):\r\n """\r\n A dictionary key of a type that we cannot or do not check for duplicates.\r\n """\r\n\r\n\r\nclass VariableKey(object):\r\n """\r\n A dictionary key which is a variable.\r\n\r\n @ivar item: The variable AST object.\r\n """\r\n def __init__(self, item):\r\n self.name = item.id\r\n\r\n def __eq__(self, compare):\r\n return (\r\n compare.__class__ == self.__class__\r\n and compare.name == self.name\r\n )\r\n\r\n def __hash__(self):\r\n return hash(self.name)\r\n\r\n\r\nclass Importation(Definition):\r\n """\r\n A binding created by an import statement.\r\n\r\n @ivar fullName: The complete name given to the import statement,\r\n possibly including multiple dotted components.\r\n @type fullName: C{str}\r\n """\r\n\r\n def __init__(self, name, source, full_name=None):\r\n 
self.fullName = full_name or name\r\n self.redefined = []\r\n super(Importation, self).__init__(name, source)\r\n\r\n def redefines(self, other):\r\n if isinstance(other, SubmoduleImportation):\r\n # See note in SubmoduleImportation about RedefinedWhileUnused\r\n return self.fullName == other.fullName\r\n return isinstance(other, Definition) and self.name == other.name\r\n\r\n def _has_alias(self):\r\n """Return whether importation needs an as clause."""\r\n return not self.fullName.split('.')[-1] == self.name\r\n\r\n @property\r\n def source_statement(self):\r\n """Generate a source statement equivalent to the import."""\r\n if self._has_alias():\r\n return 'import %s as %s' % (self.fullName, self.name)\r\n else:\r\n return 'import %s' % self.fullName\r\n\r\n def __str__(self):\r\n """Return import full name with alias."""\r\n if self._has_alias():\r\n return self.fullName + ' as ' + self.name\r\n else:\r\n return self.fullName\r\n\r\n\r\nclass SubmoduleImportation(Importation):\r\n """\r\n A binding created by a submodule import statement.\r\n\r\n A submodule import is a special case where the root module is implicitly\r\n imported, without an 'as' clause, and the submodule is also imported.\r\n Python does not restrict which attributes of the root module may be used.\r\n\r\n This class is only used when the submodule import is without an 'as' clause.\r\n\r\n pyflakes handles this case by registering the root module name in the scope,\r\n allowing any attribute of the root module to be accessed.\r\n\r\n RedefinedWhileUnused is suppressed in `redefines` unless the submodule\r\n name is also the same, to avoid false positives.\r\n """\r\n\r\n def __init__(self, name, source):\r\n # A dot should only appear in the name when it is a submodule import\r\n assert '.' 
in name and (not source or isinstance(source, ast.Import))\r\n package_name = name.split('.')[0]\r\n super(SubmoduleImportation, self).__init__(package_name, source)\r\n self.fullName = name\r\n\r\n def redefines(self, other):\r\n if isinstance(other, Importation):\r\n return self.fullName == other.fullName\r\n return super(SubmoduleImportation, self).redefines(other)\r\n\r\n def __str__(self):\r\n return self.fullName\r\n\r\n @property\r\n def source_statement(self):\r\n return 'import ' + self.fullName\r\n\r\n\r\nclass ImportationFrom(Importation):\r\n\r\n def __init__(self, name, source, module, real_name=None):\r\n self.module = module\r\n self.real_name = real_name or name\r\n\r\n if module.endswith('.'):\r\n full_name = module + self.real_name\r\n else:\r\n full_name = module + '.' + self.real_name\r\n\r\n super(ImportationFrom, self).__init__(name, source, full_name)\r\n\r\n def __str__(self):\r\n """Return import full name with alias."""\r\n if self.real_name != self.name:\r\n return self.fullName + ' as ' + self.name\r\n else:\r\n return self.fullName\r\n\r\n @property\r\n def source_statement(self):\r\n if self.real_name != self.name:\r\n return 'from %s import %s as %s' % (self.module,\r\n self.real_name,\r\n self.name)\r\n else:\r\n return 'from %s import %s' % (self.module, self.name)\r\n\r\n\r\nclass StarImportation(Importation):\r\n """A binding created by a 'from x import *' statement."""\r\n\r\n def __init__(self, name, source):\r\n super(StarImportation, self).__init__('*', source)\r\n # Each star importation needs a unique name, and\r\n # may not be the module name otherwise it will be deemed imported\r\n self.name = name + '.*'\r\n self.fullName = name\r\n\r\n @property\r\n def source_statement(self):\r\n return 'from ' + self.fullName + ' import *'\r\n\r\n def __str__(self):\r\n # When the module ends with a ., avoid the ambiguous '..*'\r\n if self.fullName.endswith('.'):\r\n return self.source_statement\r\n else:\r\n return 
self.name\r\n\r\n\r\nclass FutureImportation(ImportationFrom):\r\n """\r\n A binding created by a from `__future__` import statement.\r\n\r\n `__future__` imports are implicitly used.\r\n """\r\n\r\n def __init__(self, name, source, scope):\r\n super(FutureImportation, self).__init__(name, source, '__future__')\r\n self.used = (scope, source)\r\n\r\n\r\nclass Argument(Binding):\r\n """\r\n Represents binding a name as an argument.\r\n """\r\n\r\n\r\nclass Assignment(Binding):\r\n """\r\n Represents binding a name with an explicit assignment.\r\n\r\n The checker will raise warnings for any Assignment that isn't used. Also,\r\n the checker does not consider assignments in tuple/list unpacking to be\r\n Assignments, rather it treats them as simple Bindings.\r\n """\r\n\r\n\r\nclass FunctionDefinition(Definition):\r\n pass\r\n\r\n\r\nclass ClassDefinition(Definition):\r\n pass\r\n\r\n\r\nclass ExportBinding(Binding):\r\n """\r\n A binding created by an C{__all__} assignment. If the names in the list\r\n can be determined statically, they will be treated as names for export and\r\n additional checking applied to them.\r\n\r\n The only C{__all__} assignment that can be recognized is one which takes\r\n the value of a literal list containing literal strings. 
For example::\r\n\r\n __all__ = ["foo", "bar"]\r\n\r\n Names which are imported and not otherwise used but appear in the value of\r\n C{__all__} will not have an unused import warning reported for them.\r\n """\r\n\r\n def __init__(self, name, source, scope):\r\n if '__all__' in scope and isinstance(source, ast.AugAssign):\r\n self.names = list(scope['__all__'].names)\r\n else:\r\n self.names = []\r\n if isinstance(source.value, (ast.List, ast.Tuple)):\r\n for node in source.value.elts:\r\n if isinstance(node, ast.Str):\r\n self.names.append(node.s)\r\n super(ExportBinding, self).__init__(name, source)\r\n\r\n\r\nclass Scope(dict):\r\n importStarred = False # set to True when import * is found\r\n\r\n def __repr__(self):\r\n scope_cls = self.__class__.__name__\r\n return '<%s at 0x%x %s>' % (scope_cls, id(self), dict.__repr__(self))\r\n\r\n\r\nclass ClassScope(Scope):\r\n pass\r\n\r\n\r\nclass FunctionScope(Scope):\r\n """\r\n I represent a name scope for a function.\r\n\r\n @ivar globals: Names declared 'global' in this function.\r\n """\r\n usesLocals = False\r\n alwaysUsed = set(['__tracebackhide__',\r\n '__traceback_info__', '__traceback_supplement__'])\r\n\r\n def __init__(self):\r\n super(FunctionScope, self).__init__()\r\n # Simplify: manage the special locals as globals\r\n self.globals = self.alwaysUsed.copy()\r\n self.returnValue = None # First non-empty return\r\n self.isGenerator = False # Detect a generator\r\n\r\n def unusedAssignments(self):\r\n """\r\n Return a generator for the assignments which have not been used.\r\n """\r\n for name, binding in self.items():\r\n if (not binding.used and name not in self.globals\r\n and not self.usesLocals\r\n and isinstance(binding, Assignment)):\r\n yield name, binding\r\n\r\n\r\nclass GeneratorScope(Scope):\r\n pass\r\n\r\n\r\nclass ModuleScope(Scope):\r\n """Scope for a module."""\r\n _futures_allowed = True\r\n\r\n\r\nclass DoctestScope(ModuleScope):\r\n """Scope for a doctest."""\r\n\r\n\r\n# Globally 
defined names which are not attributes of the builtins module, or\r\n# are only present on some platforms.\r\n_MAGIC_GLOBALS = ['__file__', '__builtins__', 'WindowsError']\r\n\r\n\r\ndef getNodeName(node):\r\n # Returns node.id, or node.name, or None\r\n if hasattr(node, 'id'): # One of the many nodes with an id\r\n return node.id\r\n if hasattr(node, 'name'): # an ExceptHandler node\r\n return node.name\r\n\r\n\r\nclass Checker(object):\r\n """\r\n I check the cleanliness and sanity of Python code.\r\n\r\n @ivar _deferredFunctions: Tracking list used by L{deferFunction}. Elements\r\n of the list are two-tuples. The first element is the callable passed\r\n to L{deferFunction}. The second element is a copy of the scope stack\r\n at the time L{deferFunction} was called.\r\n\r\n @ivar _deferredAssignments: Similar to C{_deferredFunctions}, but for\r\n callables which are deferred assignment checks.\r\n """\r\n\r\n nodeDepth = 0\r\n offset = None\r\n traceTree = False\r\n\r\n builtIns = set(builtin_vars).union(_MAGIC_GLOBALS)\r\n _customBuiltIns = os.environ.get('PYFLAKES_BUILTINS')\r\n if _customBuiltIns:\r\n builtIns.update(_customBuiltIns.split(','))\r\n del _customBuiltIns\r\n\r\n def __init__(self, tree, filename='(none)', builtins=None,\r\n withDoctest='PYFLAKES_DOCTEST' in os.environ):\r\n self._nodeHandlers = {}\r\n self._deferredFunctions = []\r\n self._deferredAssignments = []\r\n self.deadScopes = []\r\n self.messages = []\r\n self.filename = filename\r\n if builtins:\r\n self.builtIns = self.builtIns.union(builtins)\r\n self.withDoctest = withDoctest\r\n self.scopeStack = [ModuleScope()]\r\n self.exceptHandlers = [()]\r\n self.root = tree\r\n self.handleChildren(tree)\r\n self.runDeferred(self._deferredFunctions)\r\n # Set _deferredFunctions to None so that deferFunction will fail\r\n # noisily if called after we've run through the deferred functions.\r\n self._deferredFunctions = None\r\n self.runDeferred(self._deferredAssignments)\r\n # Set 
_deferredAssignments to None so that deferAssignment will fail\r\n # noisily if called after we've run through the deferred assignments.\r\n self._deferredAssignments = None\r\n del self.scopeStack[1:]\r\n self.popScope()\r\n self.checkDeadScopes()\r\n\r\n def deferFunction(self, callable):\r\n """\r\n Schedule a function handler to be called just before completion.\r\n\r\n This is used for handling function bodies, which must be deferred\r\n because code later in the file might modify the global scope. When\r\n `callable` is called, the scope at the time this is called will be\r\n restored, however it will contain any new bindings added to it.\r\n """\r\n self._deferredFunctions.append((callable, self.scopeStack[:], self.offset))\r\n\r\n def deferAssignment(self, callable):\r\n """\r\n Schedule an assignment handler to be called just after deferred\r\n function handlers.\r\n """\r\n self._deferredAssignments.append((callable, self.scopeStack[:], self.offset))\r\n\r\n def runDeferred(self, deferred):\r\n """\r\n Run the callables in C{deferred} using their associated scope stack.\r\n """\r\n for handler, scope, offset in deferred:\r\n self.scopeStack = scope\r\n self.offset = offset\r\n handler()\r\n\r\n def _in_doctest(self):\r\n return (len(self.scopeStack) >= 2 and\r\n isinstance(self.scopeStack[1], DoctestScope))\r\n\r\n @property\r\n def futuresAllowed(self):\r\n if not all(isinstance(scope, ModuleScope)\r\n for scope in self.scopeStack):\r\n return False\r\n\r\n return self.scope._futures_allowed\r\n\r\n @futuresAllowed.setter\r\n def futuresAllowed(self, value):\r\n assert value is False\r\n if isinstance(self.scope, ModuleScope):\r\n self.scope._futures_allowed = False\r\n\r\n @property\r\n def scope(self):\r\n return self.scopeStack[-1]\r\n\r\n def popScope(self):\r\n self.deadScopes.append(self.scopeStack.pop())\r\n\r\n def checkDeadScopes(self):\r\n """\r\n Look at scopes which have been fully examined and report names in them\r\n which were imported but 
unused.\r\n """\r\n for scope in self.deadScopes:\r\n # imports in classes are public members\r\n if isinstance(scope, ClassScope):\r\n continue\r\n\r\n all_binding = scope.get('__all__')\r\n if all_binding and not isinstance(all_binding, ExportBinding):\r\n all_binding = None\r\n\r\n if all_binding:\r\n all_names = set(all_binding.names)\r\n undefined = all_names.difference(scope)\r\n else:\r\n all_names = undefined = []\r\n\r\n if undefined:\r\n if not scope.importStarred and \\r\n os.path.basename(self.filename) != '__init__.py':\r\n # Look for possible mistakes in the export list\r\n for name in undefined:\r\n self.report(messages.UndefinedExport,\r\n scope['__all__'].source, name)\r\n\r\n # mark all import '*' as used by the undefined in __all__\r\n if scope.importStarred:\r\n for binding in scope.values():\r\n if isinstance(binding, StarImportation):\r\n binding.used = all_binding\r\n\r\n # Look for imported names that aren't used.\r\n for value in scope.values():\r\n if isinstance(value, Importation):\r\n used = value.used or value.name in all_names\r\n if not used:\r\n messg = messages.UnusedImport\r\n self.report(messg, value.source, str(value))\r\n for node in value.redefined:\r\n if isinstance(self.getParent(node), ast.For):\r\n messg = messages.ImportShadowedByLoopVar\r\n elif used:\r\n continue\r\n else:\r\n messg = messages.RedefinedWhileUnused\r\n self.report(messg, node, value.name, value.source)\r\n\r\n def pushScope(self, scopeClass=FunctionScope):\r\n self.scopeStack.append(scopeClass())\r\n\r\n def report(self, messageClass, *args, **kwargs):\r\n self.messages.append(messageClass(self.filename, *args, **kwargs))\r\n\r\n def getParent(self, node):\r\n # Lookup the first parent which is not Tuple, List or Starred\r\n while True:\r\n node = node.parent\r\n if not hasattr(node, 'elts') and not hasattr(node, 'ctx'):\r\n return node\r\n\r\n def getCommonAncestor(self, lnode, rnode, stop):\r\n if stop in (lnode, rnode) or not (hasattr(lnode, 'parent') 
and\r\n hasattr(rnode, 'parent')):\r\n return None\r\n if lnode is rnode:\r\n return lnode\r\n\r\n if (lnode.depth > rnode.depth):\r\n return self.getCommonAncestor(lnode.parent, rnode, stop)\r\n if (lnode.depth < rnode.depth):\r\n return self.getCommonAncestor(lnode, rnode.parent, stop)\r\n return self.getCommonAncestor(lnode.parent, rnode.parent, stop)\r\n\r\n def descendantOf(self, node, ancestors, stop):\r\n for a in ancestors:\r\n if self.getCommonAncestor(node, a, stop):\r\n return True\r\n return False\r\n\r\n def differentForks(self, lnode, rnode):\r\n """True, if lnode and rnode are located on different forks of IF/TRY"""\r\n ancestor = self.getCommonAncestor(lnode, rnode, self.root)\r\n parts = getAlternatives(ancestor)\r\n if parts:\r\n for items in parts:\r\n if self.descendantOf(lnode, items, ancestor) ^ \\r\n self.descendantOf(rnode, items, ancestor):\r\n return True\r\n return False\r\n\r\n def addBinding(self, node, value):\r\n """\r\n Called when a binding is altered.\r\n\r\n - `node` is the statement responsible for the change\r\n - `value` is the new value, a Binding instance\r\n """\r\n # assert value.source in (node, node.parent):\r\n for scope in self.scopeStack[::-1]:\r\n if value.name in scope:\r\n break\r\n existing = scope.get(value.name)\r\n\r\n if existing and not self.differentForks(node, existing.source):\r\n\r\n parent_stmt = self.getParent(value.source)\r\n if isinstance(existing, Importation) and isinstance(parent_stmt, ast.For):\r\n self.report(messages.ImportShadowedByLoopVar,\r\n node, value.name, existing.source)\r\n\r\n elif scope is self.scope:\r\n if (isinstance(parent_stmt, ast.comprehension) and\r\n not isinstance(self.getParent(existing.source),\r\n (ast.For, ast.comprehension))):\r\n self.report(messages.RedefinedInListComp,\r\n node, value.name, existing.source)\r\n elif not existing.used and value.redefines(existing):\r\n self.report(messages.RedefinedWhileUnused,\r\n node, value.name, existing.source)\r\n\r\n elif 
isinstance(existing, Importation) and value.redefines(existing):\r\n existing.redefined.append(node)\r\n\r\n if value.name in self.scope:\r\n # then assume the rebound name is used as a global or within a loop\r\n value.used = self.scope[value.name].used\r\n\r\n self.scope[value.name] = value\r\n\r\n def getNodeHandler(self, node_class):\r\n try:\r\n return self._nodeHandlers[node_class]\r\n except KeyError:\r\n nodeType = getNodeType(node_class)\r\n self._nodeHandlers[node_class] = handler = getattr(self, nodeType)\r\n return handler\r\n\r\n def handleNodeLoad(self, node):\r\n name = getNodeName(node)\r\n if not name:\r\n return\r\n\r\n in_generators = None\r\n importStarred = None\r\n\r\n # try enclosing function scopes and global scope\r\n for scope in self.scopeStack[-1::-1]:\r\n # only generators used in a class scope can access the names\r\n # of the class. this is skipped during the first iteration\r\n if in_generators is False and isinstance(scope, ClassScope):\r\n continue\r\n\r\n try:\r\n scope[name].used = (self.scope, node)\r\n except KeyError:\r\n pass\r\n else:\r\n return\r\n\r\n importStarred = importStarred or scope.importStarred\r\n\r\n if in_generators is not False:\r\n in_generators = isinstance(scope, GeneratorScope)\r\n\r\n # look in the built-ins\r\n if name in self.builtIns:\r\n return\r\n\r\n if importStarred:\r\n from_list = []\r\n\r\n for scope in self.scopeStack[-1::-1]:\r\n for binding in scope.values():\r\n if isinstance(binding, StarImportation):\r\n # mark '*' imports as used for each scope\r\n binding.used = (self.scope, node)\r\n from_list.append(binding.fullName)\r\n\r\n # report * usage, with a list of possible sources\r\n from_list = ', '.join(sorted(from_list))\r\n self.report(messages.ImportStarUsage, node, name, from_list)\r\n return\r\n\r\n if name == '__path__' and os.path.basename(self.filename) == '__init__.py':\r\n # the special name __path__ is valid only in packages\r\n return\r\n\r\n # protected with a NameError 
handler?\r\n if 'NameError' not in self.exceptHandlers[-1]:\r\n self.report(messages.UndefinedName, node, name)\r\n\r\n def handleNodeStore(self, node):\r\n name = getNodeName(node)\r\n if not name:\r\n return\r\n # if the name hasn't already been defined in the current scope\r\n if isinstance(self.scope, FunctionScope) and name not in self.scope:\r\n # for each function or module scope above us\r\n for scope in self.scopeStack[:-1]:\r\n if not isinstance(scope, (FunctionScope, ModuleScope)):\r\n continue\r\n # if the name was defined in that scope, and the name has\r\n # been accessed already in the current scope, and hasn't\r\n # been declared global\r\n used = name in scope and scope[name].used\r\n if used and used[0] is self.scope and name not in self.scope.globals:\r\n # then it's probably a mistake\r\n self.report(messages.UndefinedLocal,\r\n scope[name].used[1], name, scope[name].source)\r\n break\r\n\r\n parent_stmt = self.getParent(node)\r\n if isinstance(parent_stmt, (ast.For, ast.comprehension)) or (\r\n parent_stmt != node.parent and\r\n not self.isLiteralTupleUnpacking(parent_stmt)):\r\n binding = Binding(name, node)\r\n elif name == '__all__' and isinstance(self.scope, ModuleScope):\r\n binding = ExportBinding(name, node.parent, self.scope)\r\n else:\r\n binding = Assignment(name, node)\r\n self.addBinding(node, binding)\r\n\r\n def handleNodeDelete(self, node):\r\n\r\n def on_conditional_branch():\r\n """\r\n Return `True` if node is part of a conditional body.\r\n """\r\n current = getattr(node, 'parent', None)\r\n while current:\r\n if isinstance(current, (ast.If, ast.While, ast.IfExp)):\r\n return True\r\n current = getattr(current, 'parent', None)\r\n return False\r\n\r\n name = getNodeName(node)\r\n if not name:\r\n return\r\n\r\n if on_conditional_branch():\r\n # We cannot predict if this conditional branch is going to\r\n # be executed.\r\n return\r\n\r\n if isinstance(self.scope, FunctionScope) and name in self.scope.globals:\r\n 
self.scope.globals.remove(name)\r\n else:\r\n try:\r\n del self.scope[name]\r\n except KeyError:\r\n self.report(messages.UndefinedName, node, name)\r\n\r\n def handleChildren(self, tree, omit=None):\r\n for node in iter_child_nodes(tree, omit=omit):\r\n self.handleNode(node, tree)\r\n\r\n def isLiteralTupleUnpacking(self, node):\r\n if isinstance(node, ast.Assign):\r\n for child in node.targets + [node.value]:\r\n if not hasattr(child, 'elts'):\r\n return False\r\n return True\r\n\r\n def isDocstring(self, node):\r\n """\r\n Determine if the given node is a docstring, as long as it is at the\r\n correct place in the node tree.\r\n """\r\n return isinstance(node, ast.Str) or (isinstance(node, ast.Expr) and\r\n isinstance(node.value, ast.Str))\r\n\r\n def getDocstring(self, node):\r\n if isinstance(node, ast.Expr):\r\n node = node.value\r\n if not isinstance(node, ast.Str):\r\n return (None, None)\r\n\r\n if PYPY:\r\n doctest_lineno = node.lineno - 1\r\n else:\r\n # Computed incorrectly if the docstring has backslash\r\n doctest_lineno = node.lineno - node.s.count('\n') - 1\r\n\r\n return (node.s, doctest_lineno)\r\n\r\n def handleNode(self, node, parent):\r\n if node is None:\r\n return\r\n if self.offset and getattr(node, 'lineno', None) is not None:\r\n node.lineno += self.offset[0]\r\n node.col_offset += self.offset[1]\r\n if self.traceTree:\r\n print(' ' * self.nodeDepth + node.__class__.__name__)\r\n if self.futuresAllowed and not (isinstance(node, ast.ImportFrom) or\r\n self.isDocstring(node)):\r\n self.futuresAllowed = False\r\n self.nodeDepth += 1\r\n node.depth = self.nodeDepth\r\n node.parent = parent\r\n try:\r\n handler = self.getNodeHandler(node.__class__)\r\n handler(node)\r\n finally:\r\n self.nodeDepth -= 1\r\n if self.traceTree:\r\n print(' ' * self.nodeDepth + 'end ' + node.__class__.__name__)\r\n\r\n _getDoctestExamples = doctest.DocTestParser().get_examples\r\n\r\n def handleDoctests(self, node):\r\n try:\r\n if hasattr(node, 'docstring'):\r\n 
docstring = node.docstring\r\n\r\n # This is just a reasonable guess. In Python 3.7, docstrings no\r\n # longer have line numbers associated with them. This will be\r\n # incorrect if there are empty lines between the beginning\r\n # of the function and the docstring.\r\n node_lineno = node.lineno\r\n if hasattr(node, 'args'):\r\n node_lineno = max([node_lineno] +\r\n [arg.lineno for arg in node.args.args])\r\n else:\r\n (docstring, node_lineno) = self.getDocstring(node.body[0])\r\n examples = docstring and self._getDoctestExamples(docstring)\r\n except (ValueError, IndexError):\r\n # e.g. line 6 of the docstring for <string> has inconsistent\r\n # leading whitespace: ...\r\n return\r\n if not examples:\r\n return\r\n\r\n # Place doctest in module scope\r\n saved_stack = self.scopeStack\r\n self.scopeStack = [self.scopeStack[0]]\r\n node_offset = self.offset or (0, 0)\r\n self.pushScope(DoctestScope)\r\n underscore_in_builtins = '_' in self.builtIns\r\n if not underscore_in_builtins:\r\n self.builtIns.add('_')\r\n for example in examples:\r\n try:\r\n tree = compile(example.source, "<doctest>", "exec", ast.PyCF_ONLY_AST)\r\n except SyntaxError:\r\n e = sys.exc_info()[1]\r\n if PYPY:\r\n e.offset += 1\r\n position = (node_lineno + example.lineno + e.lineno,\r\n example.indent + 4 + (e.offset or 0))\r\n self.report(messages.DoctestSyntaxError, node, position)\r\n else:\r\n self.offset = (node_offset[0] + node_lineno + example.lineno,\r\n node_offset[1] + example.indent + 4)\r\n self.handleChildren(tree)\r\n self.offset = node_offset\r\n if not underscore_in_builtins:\r\n self.builtIns.remove('_')\r\n self.popScope()\r\n self.scopeStack = saved_stack\r\n\r\n def ignore(self, node):\r\n pass\r\n\r\n # "stmt" type nodes\r\n DELETE = PRINT = FOR = ASYNCFOR = WHILE = IF = WITH = WITHITEM = \\r\n ASYNCWITH = ASYNCWITHITEM = RAISE = TRYFINALLY = EXEC = \\r\n EXPR = ASSIGN = handleChildren\r\n\r\n PASS = ignore\r\n\r\n # "expr" type nodes\r\n BOOLOP = BINOP = UNARYOP = IFEXP 
= SET = \\r\n COMPARE = CALL = REPR = ATTRIBUTE = SUBSCRIPT = \\r\n STARRED = NAMECONSTANT = handleChildren\r\n\r\n NUM = STR = BYTES = ELLIPSIS = ignore\r\n\r\n # "slice" type nodes\r\n SLICE = EXTSLICE = INDEX = handleChildren\r\n\r\n # expression contexts are node instances too, though being constants\r\n LOAD = STORE = DEL = AUGLOAD = AUGSTORE = PARAM = ignore\r\n\r\n # same for operators\r\n AND = OR = ADD = SUB = MULT = DIV = MOD = POW = LSHIFT = RSHIFT = \\r\n BITOR = BITXOR = BITAND = FLOORDIV = INVERT = NOT = UADD = USUB = \\r\n EQ = NOTEQ = LT = LTE = GT = GTE = IS = ISNOT = IN = NOTIN = \\r\n MATMULT = ignore\r\n\r\n # additional node types\r\n COMPREHENSION = KEYWORD = FORMATTEDVALUE = JOINEDSTR = handleChildren\r\n\r\n def DICT(self, node):\r\n # Complain if there are duplicate keys with different values\r\n # If they have the same value it's not going to cause potentially\r\n # unexpected behaviour so we'll not complain.\r\n keys = [\r\n convert_to_value(key) for key in node.keys\r\n ]\r\n\r\n key_counts = counter(keys)\r\n duplicate_keys = [\r\n key for key, count in key_counts.items()\r\n if count > 1\r\n ]\r\n\r\n for key in duplicate_keys:\r\n key_indices = [i for i, i_key in enumerate(keys) if i_key == key]\r\n\r\n values = counter(\r\n convert_to_value(node.values[index])\r\n for index in key_indices\r\n )\r\n if any(count == 1 for value, count in values.items()):\r\n for key_index in key_indices:\r\n key_node = node.keys[key_index]\r\n if isinstance(key, VariableKey):\r\n self.report(messages.MultiValueRepeatedKeyVariable,\r\n key_node,\r\n key.name)\r\n else:\r\n self.report(\r\n messages.MultiValueRepeatedKeyLiteral,\r\n key_node,\r\n key,\r\n )\r\n self.handleChildren(node)\r\n\r\n def ASSERT(self, node):\r\n if isinstance(node.test, ast.Tuple) and node.test.elts != []:\r\n self.report(messages.AssertTuple, node)\r\n self.handleChildren(node)\r\n\r\n def GLOBAL(self, node):\r\n """\r\n Keep track of globals declarations.\r\n """\r\n 
global_scope_index = 1 if self._in_doctest() else 0\r\n global_scope = self.scopeStack[global_scope_index]\r\n\r\n # Ignore 'global' statement in global scope.\r\n if self.scope is not global_scope:\r\n\r\n # One 'global' statement can bind multiple (comma-delimited) names.\r\n for node_name in node.names:\r\n node_value = Assignment(node_name, node)\r\n\r\n # Remove UndefinedName messages already reported for this name.\r\n # TODO: if the global is not used in this scope, it does not\r\n # become a globally defined name. See test_unused_global.\r\n self.messages = [\r\n m for m in self.messages if not\r\n isinstance(m, messages.UndefinedName) or\r\n m.message_args[0] != node_name]\r\n\r\n # Bind name to global scope if it doesn't exist already.\r\n global_scope.setdefault(node_name, node_value)\r\n\r\n # Bind name to non-global scopes, but as already "used".\r\n node_value.used = (global_scope, node)\r\n for scope in self.scopeStack[global_scope_index + 1:]:\r\n scope[node_name] = node_value\r\n\r\n NONLOCAL = GLOBAL\r\n\r\n def GENERATOREXP(self, node):\r\n self.pushScope(GeneratorScope)\r\n self.handleChildren(node)\r\n self.popScope()\r\n\r\n LISTCOMP = handleChildren if PY2 else GENERATOREXP\r\n\r\n DICTCOMP = SETCOMP = GENERATOREXP\r\n\r\n def NAME(self, node):\r\n """\r\n Handle occurrence of Name (which can be a load/store/delete access.)\r\n """\r\n # Locate the name in locals / function / globals scopes.\r\n if isinstance(node.ctx, (ast.Load, ast.AugLoad)):\r\n self.handleNodeLoad(node)\r\n if (node.id == 'locals' and isinstance(self.scope, FunctionScope)\r\n and isinstance(node.parent, ast.Call)):\r\n # we are doing locals() call in current scope\r\n self.scope.usesLocals = True\r\n elif isinstance(node.ctx, (ast.Store, ast.AugStore)):\r\n self.handleNodeStore(node)\r\n elif isinstance(node.ctx, ast.Del):\r\n self.handleNodeDelete(node)\r\n else:\r\n # must be a Param context -- this only happens for names in function\r\n # arguments, but these aren't 
dispatched through here\r\n raise RuntimeError("Got impossible expression context: %r" % (node.ctx,))\r\n\r\n def CONTINUE(self, node):\r\n # Walk the tree up until we see a loop (OK), a function or class\r\n # definition (not OK), for 'continue', a finally block (not OK), or\r\n # the top module scope (not OK)\r\n n = node\r\n while hasattr(n, 'parent'):\r\n n, n_child = n.parent, n\r\n if isinstance(n, LOOP_TYPES):\r\n # Doesn't apply unless it's in the loop itself\r\n if n_child not in n.orelse:\r\n return\r\n if isinstance(n, (ast.FunctionDef, ast.ClassDef)):\r\n break\r\n # Handle Try/TryFinally difference in Python < and >= 3.3\r\n if hasattr(n, 'finalbody') and isinstance(node, ast.Continue):\r\n if n_child in n.finalbody:\r\n self.report(messages.ContinueInFinally, node)\r\n return\r\n if isinstance(node, ast.Continue):\r\n self.report(messages.ContinueOutsideLoop, node)\r\n else: # ast.Break\r\n self.report(messages.BreakOutsideLoop, node)\r\n\r\n BREAK = CONTINUE\r\n\r\n def RETURN(self, node):\r\n if isinstance(self.scope, (ClassScope, ModuleScope)):\r\n self.report(messages.ReturnOutsideFunction, node)\r\n return\r\n\r\n if (\r\n node.value and\r\n hasattr(self.scope, 'returnValue') and\r\n not self.scope.returnValue\r\n ):\r\n self.scope.returnValue = node.value\r\n self.handleNode(node.value, node)\r\n\r\n def YIELD(self, node):\r\n if isinstance(self.scope, (ClassScope, ModuleScope)):\r\n self.report(messages.YieldOutsideFunction, node)\r\n return\r\n\r\n self.scope.isGenerator = True\r\n self.handleNode(node.value, node)\r\n\r\n AWAIT = YIELDFROM = YIELD\r\n\r\n def FUNCTIONDEF(self, node):\r\n for deco in node.decorator_list:\r\n self.handleNode(deco, node)\r\n self.LAMBDA(node)\r\n self.addBinding(node, FunctionDefinition(node.name, node))\r\n # doctest does not process doctest within a doctest,\r\n # or in nested functions.\r\n if (self.withDoctest and\r\n not self._in_doctest() and\r\n not isinstance(self.scope, FunctionScope)):\r\n 
self.deferFunction(lambda: self.handleDoctests(node))\r\n\r\n ASYNCFUNCTIONDEF = FUNCTIONDEF\r\n\r\n def LAMBDA(self, node):\r\n args = []\r\n annotations = []\r\n\r\n if PY2:\r\n def addArgs(arglist):\r\n for arg in arglist:\r\n if isinstance(arg, ast.Tuple):\r\n addArgs(arg.elts)\r\n else:\r\n args.append(arg.id)\r\n addArgs(node.args.args)\r\n defaults = node.args.defaults\r\n else:\r\n for arg in node.args.args + node.args.kwonlyargs:\r\n args.append(arg.arg)\r\n annotations.append(arg.annotation)\r\n defaults = node.args.defaults + node.args.kw_defaults\r\n\r\n # Only for Python3 FunctionDefs\r\n is_py3_func = hasattr(node, 'returns')\r\n\r\n for arg_name in ('vararg', 'kwarg'):\r\n wildcard = getattr(node.args, arg_name)\r\n if not wildcard:\r\n continue\r\n args.append(wildcard if PY33 else wildcard.arg)\r\n if is_py3_func:\r\n if PY33: # Python 2.5 to 3.3\r\n argannotation = arg_name + 'annotation'\r\n annotations.append(getattr(node.args, argannotation))\r\n else: # Python >= 3.4\r\n annotations.append(wildcard.annotation)\r\n\r\n if is_py3_func:\r\n annotations.append(node.returns)\r\n\r\n if len(set(args)) < len(args):\r\n for (idx, arg) in enumerate(args):\r\n if arg in args[:idx]:\r\n self.report(messages.DuplicateArgument, node, arg)\r\n\r\n for child in annotations + defaults:\r\n if child:\r\n self.handleNode(child, node)\r\n\r\n def runFunction():\r\n\r\n self.pushScope()\r\n for name in args:\r\n self.addBinding(node, Argument(name, node))\r\n if isinstance(node.body, list):\r\n # case for FunctionDefs\r\n for stmt in node.body:\r\n self.handleNode(stmt, node)\r\n else:\r\n # case for Lambdas\r\n self.handleNode(node.body, node)\r\n\r\n def checkUnusedAssignments():\r\n """\r\n Check to see if any assignments have not been used.\r\n """\r\n for name, binding in self.scope.unusedAssignments():\r\n self.report(messages.UnusedVariable, binding.source, name)\r\n self.deferAssignment(checkUnusedAssignments)\r\n\r\n if PY32:\r\n def 
checkReturnWithArgumentInsideGenerator():\r\n """\r\n Check to see if there is any return statement with\r\n arguments but the function is a generator.\r\n """\r\n if self.scope.isGenerator and self.scope.returnValue:\r\n self.report(messages.ReturnWithArgsInsideGenerator,\r\n self.scope.returnValue)\r\n self.deferAssignment(checkReturnWithArgumentInsideGenerator)\r\n self.popScope()\r\n\r\n self.deferFunction(runFunction)\r\n\r\n def CLASSDEF(self, node):\r\n """\r\n Check names used in a class definition, including its decorators, base\r\n classes, and the body of its definition. Additionally, add its name to\r\n the current scope.\r\n """\r\n for deco in node.decorator_list:\r\n self.handleNode(deco, node)\r\n for baseNode in node.bases:\r\n self.handleNode(baseNode, node)\r\n if not PY2:\r\n for keywordNode in node.keywords:\r\n self.handleNode(keywordNode, node)\r\n self.pushScope(ClassScope)\r\n # doctest does not process doctest within a doctest\r\n # classes within classes are processed.\r\n if (self.withDoctest and\r\n not self._in_doctest() and\r\n not isinstance(self.scope, FunctionScope)):\r\n self.deferFunction(lambda: self.handleDoctests(node))\r\n for stmt in node.body:\r\n self.handleNode(stmt, node)\r\n self.popScope()\r\n self.addBinding(node, ClassDefinition(node.name, node))\r\n\r\n def AUGASSIGN(self, node):\r\n self.handleNodeLoad(node.target)\r\n self.handleNode(node.value, node)\r\n self.handleNode(node.target, node)\r\n\r\n def TUPLE(self, node):\r\n if not PY2 and isinstance(node.ctx, ast.Store):\r\n # Python 3 advanced tuple unpacking: a, *b, c = d.\r\n # Only one starred expression is allowed, and no more than 1<<8\r\n # assignments are allowed before a stared expression. 
There is\r\n # also a limit of 1<<24 expressions after the starred expression,\r\n # which is impossible to test due to memory restrictions, but we\r\n # add it here anyway\r\n has_starred = False\r\n star_loc = -1\r\n for i, n in enumerate(node.elts):\r\n if isinstance(n, ast.Starred):\r\n if has_starred:\r\n self.report(messages.TwoStarredExpressions, node)\r\n # The SyntaxError doesn't distinguish two from more\r\n # than two.\r\n break\r\n has_starred = True\r\n star_loc = i\r\n if star_loc >= 1 << 8 or len(node.elts) - star_loc - 1 >= 1 << 24:\r\n self.report(messages.TooManyExpressionsInStarredAssignment, node)\r\n self.handleChildren(node)\r\n\r\n LIST = TUPLE\r\n\r\n def IMPORT(self, node):\r\n for alias in node.names:\r\n if '.' in alias.name and not alias.asname:\r\n importation = SubmoduleImportation(alias.name, node)\r\n else:\r\n name = alias.asname or alias.name\r\n importation = Importation(name, node, alias.name)\r\n self.addBinding(node, importation)\r\n\r\n def IMPORTFROM(self, node):\r\n if node.module == '__future__':\r\n if not self.futuresAllowed:\r\n self.report(messages.LateFutureImport,\r\n node, [n.name for n in node.names])\r\n else:\r\n self.futuresAllowed = False\r\n\r\n module = ('.' 
* node.level) + (node.module or '')\r\n\r\n for alias in node.names:\r\n name = alias.asname or alias.name\r\n if node.module == '__future__':\r\n importation = FutureImportation(name, node, self.scope)\r\n if alias.name not in __future__.all_feature_names:\r\n self.report(messages.FutureFeatureNotDefined,\r\n node, alias.name)\r\n elif alias.name == '*':\r\n # Only Python 2, local import * is a SyntaxWarning\r\n if not PY2 and not isinstance(self.scope, ModuleScope):\r\n self.report(messages.ImportStarNotPermitted,\r\n node, module)\r\n continue\r\n\r\n self.scope.importStarred = True\r\n self.report(messages.ImportStarUsed, node, module)\r\n importation = StarImportation(module, node)\r\n else:\r\n importation = ImportationFrom(name, node,\r\n module, alias.name)\r\n self.addBinding(node, importation)\r\n\r\n def TRY(self, node):\r\n handler_names = []\r\n # List the exception handlers\r\n for i, handler in enumerate(node.handlers):\r\n if isinstance(handler.type, ast.Tuple):\r\n for exc_type in handler.type.elts:\r\n handler_names.append(getNodeName(exc_type))\r\n elif handler.type:\r\n handler_names.append(getNodeName(handler.type))\r\n\r\n if handler.type is None and i < len(node.handlers) - 1:\r\n self.report(messages.DefaultExceptNotLast, handler)\r\n # Memorize the except handlers and process the body\r\n self.exceptHandlers.append(handler_names)\r\n for child in node.body:\r\n self.handleNode(child, node)\r\n self.exceptHandlers.pop()\r\n # Process the other nodes: "except:", "else:", "finally:"\r\n self.handleChildren(node, omit='body')\r\n\r\n TRYEXCEPT = TRY\r\n\r\n def EXCEPTHANDLER(self, node):\r\n if PY2 or node.name is None:\r\n self.handleChildren(node)\r\n return\r\n\r\n # 3.x: the name of the exception, which is not a Name node, but\r\n # a simple string, creates a local that is only bound within the scope\r\n # of the except: block.\r\n\r\n for scope in self.scopeStack[::-1]:\r\n if node.name in scope:\r\n is_name_previously_defined = True\r\n 
break\r\n else:\r\n is_name_previously_defined = False\r\n\r\n self.handleNodeStore(node)\r\n self.handleChildren(node)\r\n if not is_name_previously_defined:\r\n # See discussion on https://github.com/PyCQA/pyflakes/pull/59\r\n\r\n # We're removing the local name since it's being unbound\r\n # after leaving the except: block and it's always unbound\r\n # if the except: block is never entered. This will cause an\r\n # "undefined name" error raised if the checked code tries to\r\n # use the name afterwards.\r\n #\r\n # Unless it's been removed already. Then do nothing.\r\n\r\n try:\r\n del self.scope[node.name]\r\n except KeyError:\r\n pass\r\n\r\n def ANNASSIGN(self, node):\r\n if node.value:\r\n # Only bind the *targets* if the assignment has a value.\r\n # Otherwise it's not really ast.Store and shouldn't silence\r\n # UndefinedLocal warnings.\r\n self.handleNode(node.target, node)\r\n self.handleNode(node.annotation, node)\r\n if node.value:\r\n # If the assignment has value, handle the *value* now.\r\n self.handleNode(node.value, node)\r\n36.900073870.567054
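The `EXCEPTHANDLER` logic in the sampled checker above deletes the handler name from its scope model because CPython itself unbinds an `except ... as name` target when the block exits. A minimal, self-contained sketch of the runtime behaviour being modelled (function and variable names here are illustrative, not from the sample):

```python
# Python 3 deletes the "as" name when an except block exits, which is
# why the sampled checker removes the binding from its scope model.
def catch():
    try:
        raise ValueError("boom")
    except ValueError as exc:
        message = str(exc)
    # "exc" is unbound here; touching it raises UnboundLocalError,
    # a subclass of NameError.
    try:
        exc
    except NameError:
        return message, True
    return message, False

msg, name_was_unbound = catch()
assert msg == "boom"
assert name_was_unbound is True
```

This is the same reason the checker reports an "undefined name" if code uses the handler name after the `except:` block.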
06aed26d63f42531533566c9bcedcbe6f5289c5e43349pyPythonAutoScreenShot.pyinfinyte7/Auto-Screenshot5d8e39af61f3361f372ffb48add53171b7cea672MIT32020-10-29T13:57:15.000Z2021-02-19T21:59:15.000ZAutoScreenShot.pyinfinyte7/Auto-Screenshot5d8e39af61f3361f372ffb48add53171b7cea672MITNoneNoneNoneAutoScreenShot.pyinfinyte7/Auto-Screenshot5d8e39af61f3361f372ffb48add53171b7cea672MIT12021-02-19T21:59:48.000Z2021-02-19T21:59:48.000Z# Project Name: Auto Screenshot\n# Description: Take screenshot of screen when any change take place.\n# Author: Mani (Infinyte7)\n# Date: 26-10-2020\n# License: MIT\n\nfrom pyscreenshot import grab\nfrom PIL import ImageChops\n\nimport os\nimport time\nimport subprocess, sys\nfrom datetime import datetime\n\nimport tkinter as tk\nfrom tkinter import *\nfrom tkinter import font\n\n\nclass AutoScreenshot:\n def __init__(self, master):\n self.root = root\n \n root.title('Auto Screenshot')\n root.config(bg="white")\n\n fontRoboto = font.Font(family='Roboto', size=16, weight='bold')\n\n # project name label \n projectTitleLabel = Label(root, text="Auto Screenshot v1.0.0")\n projectTitleLabel.config(font=fontRoboto, bg="white", fg="#5599ff")\n projectTitleLabel.pack(padx="10")\n\n # start button\n btn_start = Button(root, text="Start", command=self.start)\n btn_start.config(highlightthickness=0, bd=0, fg="white", bg="#5fd38d",\n activebackground="#5fd38d", activeforeground="white", font=fontRoboto)\n btn_start.pack(padx="10", fill=BOTH)\n\n # close button\n btn_start = Button(root, text="Close", command=self.close)\n btn_start.config(highlightthickness=0, bd=0, fg="white", bg="#f44336",\n activebackground="#ff7043", activeforeground="white", font=fontRoboto)\n btn_start.pack(padx="10", pady="10", fill=BOTH)\n \n def start(self):\n # Create folder to store images\n directory = "Screenshots"\n self.new_folder = directory + "/" + datetime.now().strftime("%Y_%m_%d-%I_%M_%p")\n\n # all images to one folder\n if not os.path.exists(directory):\n 
os.makedirs(directory)\n\n # new folder for storing images for current session\n if not os.path.exists(self.new_folder):\n os.makedirs(self.new_folder)\n\n # Run ScreenCords.py and get cordinates\n cords_point = subprocess.check_output([sys.executable, "GetScreenCoordinates.py", "-l"])\n cord_tuple = tuple(cords_point.decode("utf-8").rstrip().split(","))\n\n # cordinates for screenshots and compare\n self.cords = (int(cord_tuple[0]), int(cord_tuple[1]), int(cord_tuple[2]), int(cord_tuple[3]))\n\n # save first image\n img1 = grab(bbox=self.cords)\n now = datetime.now().strftime("%Y_%m_%d-%I_%M_%S_%p")\n fname = self.new_folder + "/ScreenShots" + now + ".png"\n img1.save(fname)\n print("First Screenshot taken")\n\n # start taking screenshot of next images\n self.take_screenshots() \n\n def take_screenshots(self):\n # grab first and second image\n img1 = grab(bbox=self.cords)\n time.sleep(1)\n img2 = grab(bbox=self.cords)\n\n # check difference between images\n diff = ImageChops.difference(img1, img2)\n bbox = diff.getbbox()\n \n if bbox is not None:\n now = datetime.now().strftime("%Y_%m_%d-%I_%M_%S_%p")\n fname = self.new_folder + "/ScreenShots" + now + ".png"\n \n img2.save(fname)\n print("Screenshot taken")\n\n root.after(5, self.take_screenshots)\n\n def close(self):\n quit()\n\nif __name__ == "__main__": \n root = Tk()\n gui = AutoScreenshot(root)\n root.mainloop()\n32.8333331010.616602
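The sampled `AutoScreenShot.py` above detects screen changes with `ImageChops.difference(img1, img2).getbbox()`, saving a frame only when the bounding box is non-empty. A pure-Python sketch of that change-detection idea, with frames as plain lists of rows so it needs no imaging library (the function name `diff_bbox` is illustrative):

```python
# Minimal sketch of the change detection used in the sampled script:
# compare two frames pixel by pixel and report the bounding box of the
# changed region, or None when the frames are identical.
def diff_bbox(frame_a, frame_b):
    """Return (left, top, right, bottom) of changed pixels, or None."""
    changed = [
        (x, y)
        for y, (row_a, row_b) in enumerate(zip(frame_a, frame_b))
        for x, (pa, pb) in enumerate(zip(row_a, row_b))
        if pa != pb
    ]
    if not changed:
        return None  # identical frames: nothing to save
    xs = [x for x, _ in changed]
    ys = [y for _, y in changed]
    # right/bottom are exclusive, matching PIL's bbox convention
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

frame1 = [[0] * 4 for _ in range(4)]
frame2 = [row[:] for row in frame1]
frame2[1][2] = 255  # one pixel changed
assert diff_bbox(frame1, frame1) is None
assert diff_bbox(frame1, frame2) == (2, 1, 3, 2)
```

In the sampled script, a `None` bounding box means "screen unchanged", so no screenshot is written for that tick.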
06aed847e420c882fffa9edfe88238102ee06ac092749pyPythonrqalpha/utils/logger.pyHaidongHe/rqalphabb824178425909e051c456f6062a6c5bdc816421Apache-2.012020-11-10T05:44:39.000Z2020-11-10T05:44:39.000Zrqalpha/utils/logger.pyHaidongHe/rqalphabb824178425909e051c456f6062a6c5bdc816421Apache-2.0NoneNoneNonerqalpha/utils/logger.pyHaidongHe/rqalphabb824178425909e051c456f6062a6c5bdc816421Apache-2.012020-03-05T05:06:45.000Z2020-03-05T05:06:45.000Z# -*- coding: utf-8 -*-\n# 版权所有 2019 深圳米筐科技有限公司(下称“米筐科技”)\n#\n# 除非遵守当前许可,否则不得使用本软件。\n#\n# * 非商业用途(非商业用途指个人出于非商业目的使用本软件,或者高校、研究所等非营利机构出于教育、科研等目的使用本软件):\n# 遵守 Apache License 2.0(下称“Apache 2.0 许可”),您可以在以下位置获得 Apache 2.0 许可的副本:http://www.apache.org/licenses/LICENSE-2.0。\n# 除非法律有要求或以书面形式达成协议,否则本软件分发时需保持当前许可“原样”不变,且不得附加任何条件。\n#\n# * 商业用途(商业用途指个人出于任何商业目的使用本软件,或者法人或其他组织出于任何目的使用本软件):\n# 未经米筐科技授权,任何个人不得出于任何商业目的使用本软件(包括但不限于向第三方提供、销售、出租、出借、转让本软件、本软件的衍生产品、引用或借鉴了本软件功能或源代码的产品或服务),任何法人或其他组织不得出于任何目的使用本软件,否则米筐科技有权追究相应的知识产权侵权责任。\n# 在此前提下,对本软件的使用同样需要遵守 Apache 2.0 许可,Apache 2.0 许可与本许可冲突之处,以本许可为准。\n# 详细的授权流程,请联系 public@ricequant.com 获取。\n\nfrom datetime import datetime\nimport logbook\nfrom logbook import Logger, StderrHandler\n\nfrom rqalpha.utils.py2 import to_utf8\n\nlogbook.set_datetime_format("local")\n\n\n# patch warn\nlogbook.base._level_names[logbook.base.WARNING] = 'WARN'\n\n\n__all__ = [\n "user_log",\n "system_log",\n "user_system_log",\n]\n\n\nDATETIME_FORMAT = "%Y-%m-%d %H:%M:%S.%f"\n\n\ndef user_std_handler_log_formatter(record, handler):\n from rqalpha.environment import Environment\n try:\n dt = Environment.get_instance().calendar_dt.strftime(DATETIME_FORMAT)\n except Exception:\n dt = datetime.now().strftime(DATETIME_FORMAT)\n\n log = "{dt} {level} {msg}".format(\n dt=dt,\n level=record.level_name,\n msg=to_utf8(record.message),\n )\n return log\n\n\nuser_std_handler = StderrHandler(bubble=True)\nuser_std_handler.formatter = user_std_handler_log_formatter\n\n\ndef formatter_builder(tag):\n def formatter(record, handler):\n\n log = 
"[{formatter_tag}] [{time}] {level}: {msg}".format(\n formatter_tag=tag,\n level=record.level_name,\n msg=to_utf8(record.message),\n time=record.time,\n )\n\n if record.formatted_exception:\n log += "\n" + record.formatted_exception\n return log\n return formatter\n\n\n# loggers\n# 用户代码logger日志\nuser_log = Logger("user_log")\n# 给用户看的系统日志\nuser_system_log = Logger("user_system_log")\n\n# 用于用户异常的详细日志打印\nuser_detail_log = Logger("user_detail_log")\n# user_detail_log.handlers.append(StderrHandler(bubble=True))\n\n# 系统日志\nsystem_log = Logger("system_log")\nbasic_system_log = Logger("basic_system_log")\n\n# 标准输出日志\nstd_log = Logger("std_log")\n\n\ndef init_logger():\n system_log.handlers = [StderrHandler(bubble=True)]\n basic_system_log.handlers = [StderrHandler(bubble=True)]\n std_log.handlers = [StderrHandler(bubble=True)]\n user_log.handlers = []\n user_system_log.handlers = []\n\n\ndef user_print(*args, **kwargs):\n sep = kwargs.get("sep", " ")\n end = kwargs.get("end", "")\n\n message = sep.join(map(str, args)) + end\n\n user_log.info(message)\n\n\ninit_logger()\n25.2201831440.694434
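The sampled rqalpha logger builds per-tag formatters with a closure: `formatter_builder(tag)` returns a function that captures `tag` and formats each record with it. A self-contained sketch of that factory pattern, using a plain dict in place of a logbook record object (the dict keys are an assumption of this sketch):

```python
# Closure-based formatter factory, as in the sampled rqalpha logger:
# each call captures a different tag without any shared mutable state.
def formatter_builder(tag):
    def formatter(record):
        return "[{tag}] [{time}] {level}: {msg}".format(
            tag=tag,
            time=record["time"],
            level=record["level"],
            msg=record["message"],
        )
    return formatter

fmt = formatter_builder("strategy")
line = fmt({"time": "2020-03-05 05:06:45",
            "level": "WARN",
            "message": "slippage high"})
assert line == "[strategy] [2020-03-05 05:06:45] WARN: slippage high"
```

Attaching a distinct formatter per logger (as the sample does for `user_log`, `system_log`, etc.) is what lets one process emit differently tagged streams from the same handler machinery.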
06aee73a3b8946a07512f9eca678734d10d6715605517pyPythonsalt/modules/oracle.pywikimedia/operations-debs-saltbe6342abc7401ff92f67ed59f7834f1359f35314Apache-2.0NoneNoneNonesalt/modules/oracle.pywikimedia/operations-debs-saltbe6342abc7401ff92f67ed59f7834f1359f35314Apache-2.0NoneNoneNonesalt/modules/oracle.pywikimedia/operations-debs-saltbe6342abc7401ff92f67ed59f7834f1359f35314Apache-2.0NoneNoneNone# -*- coding: utf-8 -*-\n'''\nOracle DataBase connection module\n\n:mainteiner: Vladimir Bormotov <bormotov@gmail.com>\n\n:maturity: new\n\n:depends: cx_Oracle\n\n:platform: all\n\n:configuration: module provide connections for multiple Oracle DB instances.\n\n **OS Environment**\n\n .. code-block:: text\n\n ORACLE_HOME: path to oracle product\n PATH: path to Oracle Client libs need to be in PATH\n\n **pillar**\n\n .. code-block:: text\n\n oracle.dbs: list of known based\n oracle.dbs.<db>.uri: connection credentials in format:\n user/password@host[:port]/sid[ as {sysdba|sysoper}]\n'''\n\nimport os\nimport logging\nfrom salt.utils.decorators import depends\n\nlog = logging.getLogger(__name__)\n\ntry:\n import cx_Oracle\n MODE = {\n 'sysdba': cx_Oracle.SYSDBA,\n 'sysoper': cx_Oracle.SYSOPER\n }\n HAS_CX_ORACLE = True\nexcept ImportError:\n MODE = {'sysdba': 2, 'sysoper': 4}\n HAS_CX_ORACLE = False\n\n__virtualname__ = 'oracle'\n\n\ndef __virtual__():\n '''\n Load module only if cx_Oracle installed\n '''\n return __virtualname__ if HAS_CX_ORACLE else False\n\n\ndef _cx_oracle_req():\n '''\n Fallback function stub\n '''\n return 'Need "cx_Oracle" and Oracle Client installed for this functin exist'\n\n\ndef _unicode_output(cursor, name, default_type, size, precision, scale):\n '''\n Return strings values as python unicode string\n\n http://www.oracle.com/technetwork/articles/dsl/tuininga-cx-oracle-084866.html\n '''\n if default_type in (cx_Oracle.STRING, cx_Oracle.LONG_STRING,\n cx_Oracle.FIXED_CHAR, cx_Oracle.CLOB):\n return cursor.var(unicode, size, cursor.arraysize)\n\n\ndef 
_connect(uri):\n '''\n uri = user/password@host[:port]/sid[ as {sysdba|sysoper}]\n\n Return cx_Oracle.Connection instance\n '''\n # cx_Oracle.Connection() not support 'as sysdba' syntax\n uri_l = uri.rsplit(' as ', 1)\n if len(uri_l) == 2:\n credentials, mode = uri_l\n mode = MODE[mode]\n else:\n credentials = uri_l[0]\n mode = 0\n userpass, hostportsid = credentials.split('@')\n user, password = userpass.split('/')\n hostport, sid = hostportsid.split('/')\n hostport_l = hostport.split(':')\n if len(hostport_l) == 2:\n host, port = hostport_l\n else:\n host = hostport_l[0]\n port = 1521\n log.debug('connect: {0}'.format((user, password, host, port, sid, mode)))\n # force UTF-8 client encoding\n os.environ['NLS_LANG'] = '.AL32UTF8'\n conn = cx_Oracle.connect(user, password,\n cx_Oracle.makedsn(host, port, sid),\n mode)\n conn.outputtypehandler = _unicode_output\n return conn\n\n\n@depends('cx_Oracle', fallback_function=_cx_oracle_req)\ndef run_query(db, query):\n '''\n Run SQL query and return result\n\n CLI example:\n\n .. code-block:: bash\n\n salt '*' oracle.run_query my_db "select * from my_table"\n '''\n log.debug('run query on {0}: {1}'.format(db, query))\n conn = _connect(show_dbs(db)[db]['uri'])\n return conn.cursor().execute(query).fetchall()\n\n\ndef show_dbs(*dbs):\n '''\n Show databases configuration from pillar. Filter by args\n\n .. code-block:: bash\n\n salt '*' oracle.show_dbs\n salt '*' oracle.show_dbs my_db\n '''\n if dbs:\n log.debug('get dbs from pillar: {0}'.format(dbs))\n result = {}\n for db in dbs:\n result[db] = __salt__['pillar.get']('oracle:dbs:' + db)\n return result\n else:\n pillar_dbs = __salt__['pillar.get']('oracle:dbs')\n log.debug('get all ({0}) dbs from pillar'.format(len(pillar_dbs)))\n return pillar_dbs\n\n\n@depends('cx_Oracle', fallback_function=_cx_oracle_req)\ndef version(*dbs):\n '''\n Server Version (select banner from v$version)\n\n CLI Example:\n\n .. 
code-block:: bash\n\n salt '*' oracle.version\n salt '*' oracle.version my_db\n '''\n pillar_dbs = __salt__['pillar.get']('oracle:dbs')\n get_version = lambda x: [\n r[0] for r in run_query(x, "select banner from v$version order by banner")\n ]\n result = {}\n if dbs:\n log.debug('get db versions for: {0}'.format(dbs))\n for db in dbs:\n if db in pillar_dbs:\n result[db] = get_version(db)\n else:\n log.debug('get all({0}) dbs versions'.format(len(dbs)))\n for db in dbs:\n result[db] = get_version(db)\n return result\n\n\n@depends('cx_Oracle', fallback_function=_cx_oracle_req)\ndef client_version():\n '''\n Oracle Client Version\n\n CLI Example:\n\n .. code-block:: bash\n\n salt '*' oracle.client_version\n '''\n return '.'.join((str(x) for x in cx_Oracle.clientversion()))\n\n\ndef show_pillar(item=None):\n '''\n Show Pillar segment oracle.* and subitem with notation "item:subitem"\n\n CLI Example:\n\n .. code-block:: bash\n\n salt '*' oracle.show_pillar\n salt '*' oracle.show_pillar dbs:my_db\n '''\n if item:\n return __salt__['pillar.get']('oracle:' + item)\n else:\n return __salt__['pillar.get']('oracle')\n\n\ndef show_env():\n '''\n Show Environment used by Oracle Client\n\n CLI Example:\n\n .. code-block:: bash\n\n salt '*' oracle.show_env\n\n .. note::\n at first _connect() ``NLS_LANG`` will forced to '.AL32UTF8'\n '''\n envs = ['PATH', 'ORACLE_HOME', 'TNS_ADMIN', 'NLS_LANG']\n result = {}\n for env in envs:\n if env in os.environ:\n result[env] = os.environ[env]\n return result\n24.520000820.610839
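The `_connect()` helper in the sampled salt oracle module parses URIs of the form `user/password@host[:port]/sid[ as {sysdba|sysoper}]` with plain string splits, since `cx_Oracle.Connection()` does not accept the `as sysdba` suffix. A standalone re-implementation of just that parsing step (the `MODE` numbers are the fallback values the sample defines when `cx_Oracle` is missing):

```python
# Parse "user/password@host[:port]/sid[ as sysdba|sysoper]" the way the
# sampled module does, returning the pieces cx_Oracle.connect() needs.
MODE = {"sysdba": 2, "sysoper": 4}

def parse_uri(uri):
    uri_l = uri.rsplit(" as ", 1)
    if len(uri_l) == 2:
        credentials, mode = uri_l
        mode = MODE[mode]
    else:
        credentials, mode = uri_l[0], 0
    userpass, hostportsid = credentials.split("@")
    user, password = userpass.split("/")
    hostport, sid = hostportsid.split("/")
    hostport_l = hostport.split(":")
    host = hostport_l[0]
    # 1521 is the Oracle listener default the sample falls back to
    port = int(hostport_l[1]) if len(hostport_l) == 2 else 1521
    return user, password, host, port, sid, mode

assert parse_uri("scott/tiger@db1:1522/orcl as sysdba") == (
    "scott", "tiger", "db1", 1522, "orcl", 2)
assert parse_uri("scott/tiger@db1/orcl") == (
    "scott", "tiger", "db1", 1521, "orcl", 0)
```

`rsplit(" as ", 1)` matters: splitting from the right keeps a literal `" as "` inside the password from being misread as the mode suffix only when it appears once, which mirrors the sample's behaviour rather than improving on it.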
06aef1e728fe8745d27da0badcde01e88381bd9b332785pyPythontests/test_std.pyashwini-balnaves/python-consul4ddec9b57eb5284b58967ce1a9b2422519f88cc2MIT4692015-01-02T18:36:39.000Z2022-03-10T09:18:13.000Ztests/test_std.pyashwini-balnaves/python-consul4ddec9b57eb5284b58967ce1a9b2422519f88cc2MIT2492015-01-21T19:06:34.000Z2022-01-12T09:12:58.000Ztests/test_std.pyashwini-balnaves/python-consul4ddec9b57eb5284b58967ce1a9b2422519f88cc2MIT2792015-01-17T04:25:04.000Z2022-03-11T22:06:46.000Zimport base64\nimport operator\nimport struct\nimport time\n\nimport pytest\nimport six\n\nimport consul\nimport consul.std\n\n\nCheck = consul.Check\n\n\nclass TestHTTPClient(object):\n def test_uri(self):\n http = consul.std.HTTPClient()\n assert http.uri('/v1/kv') == 'http://127.0.0.1:8500/v1/kv'\n assert http.uri('/v1/kv', params={'index': 1}) == \\n 'http://127.0.0.1:8500/v1/kv?index=1'\n\n\nclass TestConsul(object):\n def test_kv(self, consul_port):\n c = consul.Consul(port=consul_port)\n index, data = c.kv.get('foo')\n assert data is None\n assert c.kv.put('foo', 'bar') is True\n index, data = c.kv.get('foo')\n assert data['Value'] == six.b('bar')\n\n def test_kv_wait(self, consul_port):\n c = consul.Consul(port=consul_port)\n assert c.kv.put('foo', 'bar') is True\n index, data = c.kv.get('foo')\n check, data = c.kv.get('foo', index=index, wait='20ms')\n assert index == check\n\n def test_kv_encoding(self, consul_port):\n c = consul.Consul(port=consul_port)\n\n # test binary\n c.kv.put('foo', struct.pack('i', 1000))\n index, data = c.kv.get('foo')\n assert struct.unpack('i', data['Value']) == (1000,)\n\n # test unicode\n c.kv.put('foo', u'bar')\n index, data = c.kv.get('foo')\n assert data['Value'] == six.b('bar')\n\n # test empty-string comes back as `None`\n c.kv.put('foo', '')\n index, data = c.kv.get('foo')\n assert data['Value'] is None\n\n # test None\n c.kv.put('foo', None)\n index, data = c.kv.get('foo')\n assert data['Value'] is None\n\n # check unencoded values raises assert\n 
pytest.raises(AssertionError, c.kv.put, 'foo', {1: 2})\n\n def test_kv_put_cas(self, consul_port):\n c = consul.Consul(port=consul_port)\n assert c.kv.put('foo', 'bar', cas=50) is False\n assert c.kv.put('foo', 'bar', cas=0) is True\n index, data = c.kv.get('foo')\n\n assert c.kv.put('foo', 'bar2', cas=data['ModifyIndex']-1) is False\n assert c.kv.put('foo', 'bar2', cas=data['ModifyIndex']) is True\n index, data = c.kv.get('foo')\n assert data['Value'] == six.b('bar2')\n\n def test_kv_put_flags(self, consul_port):\n c = consul.Consul(port=consul_port)\n c.kv.put('foo', 'bar')\n index, data = c.kv.get('foo')\n assert data['Flags'] == 0\n\n assert c.kv.put('foo', 'bar', flags=50) is True\n index, data = c.kv.get('foo')\n assert data['Flags'] == 50\n\n def test_kv_recurse(self, consul_port):\n c = consul.Consul(port=consul_port)\n index, data = c.kv.get('foo/', recurse=True)\n assert data is None\n\n c.kv.put('foo/', None)\n index, data = c.kv.get('foo/', recurse=True)\n assert len(data) == 1\n\n c.kv.put('foo/bar1', '1')\n c.kv.put('foo/bar2', '2')\n c.kv.put('foo/bar3', '3')\n index, data = c.kv.get('foo/', recurse=True)\n assert [x['Key'] for x in data] == [\n 'foo/', 'foo/bar1', 'foo/bar2', 'foo/bar3']\n assert [x['Value'] for x in data] == [\n None, six.b('1'), six.b('2'), six.b('3')]\n\n def test_kv_delete(self, consul_port):\n c = consul.Consul(port=consul_port)\n c.kv.put('foo1', '1')\n c.kv.put('foo2', '2')\n c.kv.put('foo3', '3')\n index, data = c.kv.get('foo', recurse=True)\n assert [x['Key'] for x in data] == ['foo1', 'foo2', 'foo3']\n\n assert c.kv.delete('foo2') is True\n index, data = c.kv.get('foo', recurse=True)\n assert [x['Key'] for x in data] == ['foo1', 'foo3']\n assert c.kv.delete('foo', recurse=True) is True\n index, data = c.kv.get('foo', recurse=True)\n assert data is None\n\n def test_kv_delete_cas(self, consul_port):\n c = consul.Consul(port=consul_port)\n\n c.kv.put('foo', 'bar')\n index, data = c.kv.get('foo')\n\n assert c.kv.delete('foo', 
cas=data['ModifyIndex']-1) is False\n assert c.kv.get('foo') == (index, data)\n\n assert c.kv.delete('foo', cas=data['ModifyIndex']) is True\n index, data = c.kv.get('foo')\n assert data is None\n\n def test_kv_acquire_release(self, consul_port):\n c = consul.Consul(port=consul_port)\n\n pytest.raises(\n consul.ConsulException, c.kv.put, 'foo', 'bar', acquire='foo')\n\n s1 = c.session.create()\n s2 = c.session.create()\n\n assert c.kv.put('foo', '1', acquire=s1) is True\n assert c.kv.put('foo', '2', acquire=s2) is False\n assert c.kv.put('foo', '1', acquire=s1) is True\n assert c.kv.put('foo', '1', release='foo') is False\n assert c.kv.put('foo', '2', release=s2) is False\n assert c.kv.put('foo', '2', release=s1) is True\n\n c.session.destroy(s1)\n c.session.destroy(s2)\n\n def test_kv_keys_only(self, consul_port):\n c = consul.Consul(port=consul_port)\n\n assert c.kv.put('bar', '4') is True\n assert c.kv.put('base/foo', '1') is True\n assert c.kv.put('base/base/foo', '5') is True\n\n index, data = c.kv.get('base/', keys=True, separator='/')\n assert data == ['base/base/', 'base/foo']\n\n def test_transaction(self, consul_port):\n c = consul.Consul(port=consul_port)\n value = base64.b64encode(b"1").decode("utf8")\n d = {"KV": {"Verb": "set", "Key": "asdf", "Value": value}}\n r = c.txn.put([d])\n assert r["Errors"] is None\n\n d = {"KV": {"Verb": "get", "Key": "asdf"}}\n r = c.txn.put([d])\n assert r["Results"][0]["KV"]["Value"] == value\n\n def test_event(self, consul_port):\n c = consul.Consul(port=consul_port)\n\n assert c.event.fire("fooname", "foobody")\n index, events = c.event.list()\n assert [x['Name'] == 'fooname' for x in events]\n assert [x['Payload'] == 'foobody' for x in events]\n\n def test_event_targeted(self, consul_port):\n c = consul.Consul(port=consul_port)\n\n assert c.event.fire("fooname", "foobody")\n index, events = c.event.list(name="othername")\n assert events == []\n\n index, events = c.event.list(name="fooname")\n assert [x['Name'] == 
'fooname' for x in events]\n assert [x['Payload'] == 'foobody' for x in events]\n\n def test_agent_checks(self, consul_port):\n c = consul.Consul(port=consul_port)\n\n def verify_and_dereg_check(check_id):\n assert set(c.agent.checks().keys()) == set([check_id])\n assert c.agent.check.deregister(check_id) is True\n assert set(c.agent.checks().keys()) == set([])\n\n def verify_check_status(check_id, status, notes=None):\n checks = c.agent.checks()\n assert checks[check_id]['Status'] == status\n if notes:\n assert checks[check_id]['Output'] == notes\n\n # test setting notes on a check\n c.agent.check.register('check', Check.ttl('1s'), notes='foo')\n assert c.agent.checks()['check']['Notes'] == 'foo'\n c.agent.check.deregister('check')\n\n assert set(c.agent.checks().keys()) == set([])\n assert c.agent.check.register(\n 'script_check', Check.script('/bin/true', 10)) is True\n verify_and_dereg_check('script_check')\n\n assert c.agent.check.register(\n 'check name',\n Check.script('/bin/true', 10),\n check_id='check_id') is True\n verify_and_dereg_check('check_id')\n\n http_addr = "http://127.0.0.1:{0}".format(consul_port)\n assert c.agent.check.register(\n 'http_check', Check.http(http_addr, '10ms')) is True\n time.sleep(1)\n verify_check_status('http_check', 'passing')\n verify_and_dereg_check('http_check')\n\n assert c.agent.check.register(\n 'http_timeout_check',\n Check.http(http_addr, '100ms', timeout='2s')) is True\n verify_and_dereg_check('http_timeout_check')\n\n assert c.agent.check.register('ttl_check', Check.ttl('100ms')) is True\n\n assert c.agent.check.ttl_warn('ttl_check') is True\n verify_check_status('ttl_check', 'warning')\n assert c.agent.check.ttl_warn(\n 'ttl_check', notes='its not quite right') is True\n verify_check_status('ttl_check', 'warning', 'its not quite right')\n\n assert c.agent.check.ttl_fail('ttl_check') is True\n verify_check_status('ttl_check', 'critical')\n assert c.agent.check.ttl_fail(\n 'ttl_check', notes='something went boink!') 
is True\n verify_check_status(\n 'ttl_check', 'critical', notes='something went boink!')\n\n assert c.agent.check.ttl_pass('ttl_check') is True\n verify_check_status('ttl_check', 'passing')\n assert c.agent.check.ttl_pass(\n 'ttl_check', notes='all hunky dory!') is True\n verify_check_status('ttl_check', 'passing', notes='all hunky dory!')\n # wait for ttl to expire\n time.sleep(120/1000.0)\n verify_check_status('ttl_check', 'critical')\n verify_and_dereg_check('ttl_check')\n\n def test_service_dereg_issue_156(self, consul_port):\n # https://github.com/cablehead/python-consul/issues/156\n service_name = 'app#127.0.0.1#3000'\n c = consul.Consul(port=consul_port)\n c.agent.service.register(service_name)\n\n time.sleep(80/1000.0)\n\n index, nodes = c.health.service(service_name)\n assert [node['Service']['ID'] for node in nodes] == [service_name]\n\n # Clean up tasks\n assert c.agent.service.deregister(service_name) is True\n\n time.sleep(40/1000.0)\n\n index, nodes = c.health.service(service_name)\n assert [node['Service']['ID'] for node in nodes] == []\n\n def test_agent_checks_service_id(self, consul_port):\n c = consul.Consul(port=consul_port)\n c.agent.service.register('foo1')\n\n time.sleep(40/1000.0)\n\n index, nodes = c.health.service('foo1')\n assert [node['Service']['ID'] for node in nodes] == ['foo1']\n\n c.agent.check.register('foo', Check.ttl('100ms'), service_id='foo1')\n\n time.sleep(40/1000.0)\n\n index, nodes = c.health.service('foo1')\n assert set([\n check['ServiceID'] for node in nodes\n for check in node['Checks']]) == set(['foo1', ''])\n assert set([\n check['CheckID'] for node in nodes\n for check in node['Checks']]) == set(['foo', 'serfHealth'])\n\n # Clean up tasks\n assert c.agent.check.deregister('foo') is True\n\n time.sleep(40/1000.0)\n\n assert c.agent.service.deregister('foo1') is True\n\n time.sleep(40/1000.0)\n\n def test_agent_register_check_no_service_id(self, consul_port):\n c = consul.Consul(port=consul_port)\n index, nodes = 
c.health.service("foo1")\n assert nodes == []\n\n pytest.raises(consul.std.base.ConsulException,\n c.agent.check.register,\n 'foo', Check.ttl('100ms'),\n service_id='foo1')\n\n time.sleep(40/1000.0)\n\n assert c.agent.checks() == {}\n\n # Cleanup tasks\n c.agent.check.deregister('foo')\n\n time.sleep(40/1000.0)\n\n def test_agent_register_enable_tag_override(self, consul_port):\n c = consul.Consul(port=consul_port)\n index, nodes = c.health.service("foo1")\n assert nodes == []\n\n c.agent.service.register('foo', enable_tag_override=True)\n\n assert c.agent.services()['foo']['EnableTagOverride']\n # Cleanup tasks\n c.agent.check.deregister('foo')\n\n def test_agent_service_maintenance(self, consul_port):\n c = consul.Consul(port=consul_port)\n\n c.agent.service.register('foo', check=Check.ttl('100ms'))\n\n time.sleep(40/1000.0)\n\n c.agent.service.maintenance('foo', 'true', "test")\n\n time.sleep(40/1000.0)\n\n checks_pre = c.agent.checks()\n assert '_service_maintenance:foo' in checks_pre.keys()\n assert 'test' == checks_pre['_service_maintenance:foo']['Notes']\n\n c.agent.service.maintenance('foo', 'false')\n\n time.sleep(40/1000.0)\n\n checks_post = c.agent.checks()\n assert '_service_maintenance:foo' not in checks_post.keys()\n\n # Cleanup\n c.agent.service.deregister('foo')\n\n time.sleep(40/1000.0)\n\n def test_agent_node_maintenance(self, consul_port):\n c = consul.Consul(port=consul_port)\n\n c.agent.maintenance('true', "test")\n\n time.sleep(40/1000.0)\n\n checks_pre = c.agent.checks()\n assert '_node_maintenance' in checks_pre.keys()\n assert 'test' == checks_pre['_node_maintenance']['Notes']\n\n c.agent.maintenance('false')\n\n time.sleep(40/1000.0)\n\n checks_post = c.agent.checks()\n assert '_node_maintenance' not in checks_post.keys()\n\n def test_agent_members(self, consul_port):\n c = consul.Consul(port=consul_port)\n members = c.agent.members()\n for x in members:\n assert x['Status'] == 1\n assert not x['Name'] is None\n assert not x['Tags'] is 
None\n assert c.agent.self()['Member'] in members\n\n wan_members = c.agent.members(wan=True)\n for x in wan_members:\n assert 'dc1' in x['Name']\n\n def test_agent_self(self, consul_port):\n c = consul.Consul(port=consul_port)\n assert set(c.agent.self().keys()) == set(['Member', 'Stats', 'Config',\n 'Coord', 'DebugConfig',\n 'Meta'])\n\n def test_agent_services(self, consul_port):\n c = consul.Consul(port=consul_port)\n assert c.agent.service.register('foo') is True\n assert set(c.agent.services().keys()) == set(['foo'])\n assert c.agent.service.deregister('foo') is True\n assert set(c.agent.services().keys()) == set()\n\n # test address param\n assert c.agent.service.register('foo', address='10.10.10.1') is True\n assert [\n v['Address'] for k, v in c.agent.services().items()\n if k == 'foo'][0] == '10.10.10.1'\n assert c.agent.service.deregister('foo') is True\n\n def test_catalog(self, consul_port):\n c = consul.Consul(port=consul_port)\n\n # grab the node our server created, so we can ignore it\n _, nodes = c.catalog.nodes()\n assert len(nodes) == 1\n current = nodes[0]\n\n # test catalog.datacenters\n assert c.catalog.datacenters() == ['dc1']\n\n # test catalog.register\n pytest.raises(\n consul.ConsulException,\n c.catalog.register, 'foo', '10.1.10.11', dc='dc2')\n\n assert c.catalog.register(\n 'n1',\n '10.1.10.11',\n service={'service': 's1'},\n check={'name': 'c1'}) is True\n assert c.catalog.register(\n 'n1', '10.1.10.11', service={'service': 's2'}) is True\n assert c.catalog.register(\n 'n2', '10.1.10.12',\n service={'service': 's1', 'tags': ['master']}) is True\n\n # test catalog.nodes\n pytest.raises(consul.ConsulException, c.catalog.nodes, dc='dc2')\n _, nodes = c.catalog.nodes()\n nodes.remove(current)\n assert [x['Node'] for x in nodes] == ['n1', 'n2']\n\n # test catalog.services\n pytest.raises(consul.ConsulException, c.catalog.services, dc='dc2')\n _, services = c.catalog.services()\n assert services == {'s1': [u'master'], 's2': [], 'consul': 
[]}

        # test catalog.node
        pytest.raises(consul.ConsulException, c.catalog.node, 'n1', dc='dc2')
        _, node = c.catalog.node('n1')
        assert set(node['Services'].keys()) == set(['s1', 's2'])
        _, node = c.catalog.node('n3')
        assert node is None

        # test catalog.service
        pytest.raises(
            consul.ConsulException, c.catalog.service, 's1', dc='dc2')
        _, nodes = c.catalog.service('s1')
        assert set([x['Node'] for x in nodes]) == set(['n1', 'n2'])
        _, nodes = c.catalog.service('s1', tag='master')
        assert set([x['Node'] for x in nodes]) == set(['n2'])

        # test catalog.deregister
        pytest.raises(
            consul.ConsulException, c.catalog.deregister, 'n2', dc='dc2')
        assert c.catalog.deregister('n1', check_id='c1') is True
        assert c.catalog.deregister('n2', service_id='s1') is True
        # check the nodes weren't removed
        _, nodes = c.catalog.nodes()
        nodes.remove(current)
        assert [x['Node'] for x in nodes] == ['n1', 'n2']
        # check n2's s1 service was removed though
        _, nodes = c.catalog.service('s1')
        assert set([x['Node'] for x in nodes]) == set(['n1'])

        # cleanup
        assert c.catalog.deregister('n1') is True
        assert c.catalog.deregister('n2') is True
        _, nodes = c.catalog.nodes()
        nodes.remove(current)
        assert [x['Node'] for x in nodes] == []

    def test_health_service(self, consul_port):
        c = consul.Consul(port=consul_port)

        # check there are no nodes for the service 'foo'
        index, nodes = c.health.service('foo')
        assert nodes == []

        # register two nodes, one with a long ttl, the other shorter
        c.agent.service.register(
            'foo',
            service_id='foo:1',
            check=Check.ttl('10s'),
            tags=['tag:foo:1'])
        c.agent.service.register(
            'foo', service_id='foo:2', check=Check.ttl('100ms'))

        time.sleep(40/1000.0)

        # check the nodes show up for the /health/service endpoint
        index, nodes = c.health.service('foo')
        assert [node['Service']['ID'] for node in nodes] == ['foo:1', 'foo:2']

        # but that they aren't passing their health check
        index, nodes = c.health.service('foo', passing=True)
        assert nodes == []

        # ping the two nodes' health checks
        c.agent.check.ttl_pass('service:foo:1')
        c.agent.check.ttl_pass('service:foo:2')

        time.sleep(40/1000.0)

        # both nodes are now available
        index, nodes = c.health.service('foo', passing=True)
        assert [node['Service']['ID'] for node in nodes] == ['foo:1', 'foo:2']

        # wait until the short ttl node fails
        time.sleep(120/1000.0)

        # only one node available
        index, nodes = c.health.service('foo', passing=True)
        assert [node['Service']['ID'] for node in nodes] == ['foo:1']

        # ping the failed node's health check
        c.agent.check.ttl_pass('service:foo:2')

        time.sleep(40/1000.0)

        # check both nodes are available
        index, nodes = c.health.service('foo', passing=True)
        assert [node['Service']['ID'] for node in nodes] == ['foo:1', 'foo:2']

        # check that tag works
        index, nodes = c.health.service('foo', tag='tag:foo:1')
        assert [node['Service']['ID'] for node in nodes] == ['foo:1']

        # deregister the nodes
        c.agent.service.deregister('foo:1')
        c.agent.service.deregister('foo:2')

        time.sleep(40/1000.0)

        index, nodes = c.health.service('foo')
        assert nodes == []

    def test_health_state(self, consul_port):
        c = consul.Consul(port=consul_port)

        # The empty string is for the Serf Health Status check, which has an
        # empty ServiceID
        index, nodes = c.health.state('any')
        assert [node['ServiceID'] for node in nodes] == ['']

        # register two nodes, one with a long ttl, the other shorter
        c.agent.service.register(
            'foo', service_id='foo:1', check=Check.ttl('10s'))
        c.agent.service.register(
            'foo', service_id='foo:2', check=Check.ttl('100ms'))

        time.sleep(40/1000.0)

        # check the nodes show up for the /health/state/any endpoint
        index, nodes = c.health.state('any')
        assert set([node['ServiceID'] for node in nodes]) == set(
            ['', 'foo:1', 'foo:2'])

        # but that they aren't passing their health check
        index, nodes = c.health.state('passing')
        assert [node['ServiceID'] for node in nodes] != 'foo'

        # ping the two nodes' health checks
        c.agent.check.ttl_pass('service:foo:1')
        c.agent.check.ttl_pass('service:foo:2')

        time.sleep(40/1000.0)

        # both nodes are now available
        index, nodes = c.health.state('passing')
        assert set([node['ServiceID'] for node in nodes]) == set(
            ['', 'foo:1', 'foo:2'])

        # wait until the short ttl node fails
        time.sleep(2200/1000.0)

        # only one node available
        index, nodes = c.health.state('passing')
        assert set([node['ServiceID'] for node in nodes]) == set(
            ['', 'foo:1'])

        # ping the failed node's health check
        c.agent.check.ttl_pass('service:foo:2')

        time.sleep(40/1000.0)

        # check both nodes are available
        index, nodes = c.health.state('passing')
        assert set([node['ServiceID'] for node in nodes]) == set(
            ['', 'foo:1', 'foo:2'])

        # deregister the nodes
        c.agent.service.deregister('foo:1')
        c.agent.service.deregister('foo:2')

        time.sleep(40/1000.0)

        index, nodes = c.health.state('any')
        assert [node['ServiceID'] for node in nodes] == ['']

    def test_health_node(self, consul_port):
        c = consul.Consul(port=consul_port)
        # grab local node name
        node = c.agent.self()['Config']['NodeName']
        index, checks = c.health.node(node)
        assert node in [check["Node"] for check in checks]

    def test_health_checks(self, consul_port):
        c = consul.Consul(port=consul_port)

        c.agent.service.register(
            'foobar', service_id='foobar', check=Check.ttl('10s'))

        time.sleep(40/1000.0)

        index, checks = c.health.checks('foobar')

        assert [check['ServiceID'] for check in checks] == ['foobar']
        assert [check['CheckID'] for check in checks] == ['service:foobar']

        c.agent.service.deregister('foobar')

        time.sleep(40/1000.0)

        index, checks = c.health.checks('foobar')
        assert len(checks) == 0

    def test_session(self, consul_port):
        c = consul.Consul(port=consul_port)

        # session.create
        pytest.raises(consul.ConsulException, c.session.create, node='n2')
        pytest.raises(consul.ConsulException, c.session.create, dc='dc2')
        session_id = c.session.create('my-session')

        # session.list
        pytest.raises(consul.ConsulException, c.session.list, dc='dc2')
        _, sessions = c.session.list()
        assert [x['Name'] for x in sessions] == ['my-session']

        # session.info
        pytest.raises(
            consul.ConsulException, c.session.info, session_id, dc='dc2')
        index, session = c.session.info('1'*36)
        assert session is None
        index, session = c.session.info(session_id)
        assert session['Name'] == 'my-session'

        # session.node
        node = session['Node']
        pytest.raises(
            consul.ConsulException, c.session.node, node, dc='dc2')
        _, sessions = c.session.node(node)
        assert [x['Name'] for x in sessions] == ['my-session']

        # session.destroy
        pytest.raises(
            consul.ConsulException, c.session.destroy, session_id, dc='dc2')
        assert c.session.destroy(session_id) is True
        _, sessions = c.session.list()
        assert sessions == []

    def test_session_delete_ttl_renew(self, consul_port):
        c = consul.Consul(port=consul_port)

        s = c.session.create(behavior='delete', ttl=20)

        # attempt to renew an unknown session
        pytest.raises(consul.NotFound, c.session.renew, '1'*36)

        session = c.session.renew(s)
        assert session['Behavior'] == 'delete'
        assert session['TTL'] == '20s'

        # trying out the behavior
        assert c.kv.put('foo', '1', acquire=s) is True
        index, data = c.kv.get('foo')
        assert data['Value'] == six.b('1')

        c.session.destroy(s)
        index, data = c.kv.get('foo')
        assert data is None

    def test_acl_disabled(self, consul_port):
        c = consul.Consul(port=consul_port)
        pytest.raises(consul.ACLDisabled, c.acl.list)
        pytest.raises(consul.ACLDisabled, c.acl.info, '1'*36)
        pytest.raises(consul.ACLDisabled, c.acl.create)
        pytest.raises(consul.ACLDisabled, c.acl.update, 'foo')
        pytest.raises(consul.ACLDisabled, c.acl.clone, 'foo')
        pytest.raises(consul.ACLDisabled, c.acl.destroy, 'foo')

    def test_acl_permission_denied(self, acl_consul):
        c = consul.Consul(port=acl_consul.port)
        pytest.raises(consul.ACLPermissionDenied, c.acl.list)
        pytest.raises(consul.ACLPermissionDenied, c.acl.create)
        pytest.raises(consul.ACLPermissionDenied, c.acl.update, 'anonymous')
        pytest.raises(consul.ACLPermissionDenied, c.acl.clone, 'anonymous')
        pytest.raises(consul.ACLPermissionDenied, c.acl.destroy, 'anonymous')

    def test_acl_explicit_token_use(self, acl_consul):
        c = consul.Consul(port=acl_consul.port)
        master_token = acl_consul.token

        acls = c.acl.list(token=master_token)
        assert set([x['ID'] for x in acls]) == \
            set(['anonymous', master_token])

        assert c.acl.info('1'*36) is None
        compare = [c.acl.info(master_token), c.acl.info('anonymous')]
        compare.sort(key=operator.itemgetter('ID'))
        assert acls == compare

        rules = """
            key "" {
                policy = "read"
            }
            key "private/" {
                policy = "deny"
            }
            service "foo-" {
                policy = "write"
            }
            service "bar-" {
                policy = "read"
            }
        """

        token = c.acl.create(rules=rules, token=master_token)
        assert c.acl.info(token)['Rules'] == rules

        token2 = c.acl.clone(token, token=master_token)
        assert c.acl.info(token2)['Rules'] == rules

        assert c.acl.update(token2, name='Foo', token=master_token) == token2
        assert c.acl.info(token2)['Name'] == 'Foo'

        assert c.acl.destroy(token2, token=master_token) is True
        assert c.acl.info(token2) is None

        c.kv.put('foo', 'bar')
        c.kv.put('private/foo', 'bar')

        assert c.kv.get('foo', token=token)[1]['Value'] == six.b('bar')
        pytest.raises(
            consul.ACLPermissionDenied, c.kv.put, 'foo', 'bar2', token=token)
        pytest.raises(
            consul.ACLPermissionDenied, c.kv.delete, 'foo', token=token)

        assert c.kv.get('private/foo')[1]['Value'] == six.b('bar')
        pytest.raises(
            consul.ACLPermissionDenied,
            c.kv.get, 'private/foo', token=token)
        pytest.raises(
            consul.ACLPermissionDenied,
            c.kv.put, 'private/foo', 'bar2', token=token)
        pytest.raises(
            consul.ACLPermissionDenied,
            c.kv.delete, 'private/foo', token=token)

        # test token pass-through for service registration
        pytest.raises(
            consul.ACLPermissionDenied,
            c.agent.service.register, "bar-1", token=token)
        c.agent.service.register("foo-1", token=token)
        index, data = c.health.service('foo-1', token=token)
        assert data[0]['Service']['ID'] == "foo-1"
        index, data = c.health.checks('foo-1', token=token)
        assert data == []
        index, data = c.health.service('bar-1', token=token)
        assert not data

        # clean up
        assert c.agent.service.deregister('foo-1') is True
        c.acl.destroy(token, token=master_token)
        acls = c.acl.list(token=master_token)
        assert set([x['ID'] for x in acls]) == \
            set(['anonymous', master_token])

    def test_acl_implicit_token_use(self, acl_consul):
        # configure client to use the master token by default
        c = consul.Consul(port=acl_consul.port, token=acl_consul.token)
        master_token = acl_consul.token

        acls = c.acl.list()
        assert set([x['ID'] for x in acls]) == \
            set(['anonymous', master_token])

        assert c.acl.info('foo') is None
        compare = [c.acl.info(master_token), c.acl.info('anonymous')]
        compare.sort(key=operator.itemgetter('ID'))
        assert acls == compare

        rules = """
            key "" {
                policy = "read"
            }
            key "private/" {
                policy = "deny"
            }
        """
        token = c.acl.create(rules=rules)
        assert c.acl.info(token)['Rules'] == rules

        token2 = c.acl.clone(token)
        assert c.acl.info(token2)['Rules'] == rules

        assert c.acl.update(token2, name='Foo') == token2
        assert c.acl.info(token2)['Name'] == 'Foo'

        assert c.acl.destroy(token2) is True
        assert c.acl.info(token2) is None

        c.kv.put('foo', 'bar')
        c.kv.put('private/foo', 'bar')

        c_limited = consul.Consul(port=acl_consul.port, token=token)
        assert c_limited.kv.get('foo')[1]['Value'] == six.b('bar')
        pytest.raises(
            consul.ACLPermissionDenied, c_limited.kv.put, 'foo', 'bar2')
        pytest.raises(
            consul.ACLPermissionDenied, c_limited.kv.delete, 'foo')

        assert c.kv.get('private/foo')[1]['Value'] == six.b('bar')
        pytest.raises(
            consul.ACLPermissionDenied,
            c_limited.kv.get, 'private/foo')
        pytest.raises(
            consul.ACLPermissionDenied,
            c_limited.kv.put, 'private/foo', 'bar2')
        pytest.raises(
            consul.ACLPermissionDenied,
            c_limited.kv.delete, 'private/foo')

        # check we can override the client's default token
        pytest.raises(
            consul.ACLPermissionDenied,
            c.kv.get, 'private/foo', token=token
        )
        pytest.raises(
            consul.ACLPermissionDenied,
            c.kv.put, 'private/foo', 'bar2', token=token)
        pytest.raises(
            consul.ACLPermissionDenied,
            c.kv.delete, 'private/foo', token=token)

        # clean up
        c.acl.destroy(token)
        acls = c.acl.list()
        assert set([x['ID'] for x in acls]) == \
            set(['anonymous', master_token])

    def test_status_leader(self, consul_port):
        c = consul.Consul(port=consul_port)

        agent_self = c.agent.self()
        leader = c.status.leader()
        addr_port = agent_self['Stats']['consul']['leader_addr']

        assert leader == addr_port, \
            "Leader value was {0}, expected value " \
            "was {1}".format(leader, addr_port)

    def test_status_peers(self, consul_port):
        c = consul.Consul(port=consul_port)

        agent_self = c.agent.self()

        addr_port = agent_self['Stats']['consul']['leader_addr']
        peers = c.status.peers()

        assert addr_port in peers, \
            "Expected value '{0}' " \
            "in peer list but it was not present".format(addr_port)

    def test_query(self, consul_port):
        c = consul.Consul(port=consul_port)

        # check that query list is empty
        queries = c.query.list()
        assert queries == []

        # create a new named query
        query_service = 'foo'
        query_name = 'fooquery'
        query = c.query.create(query_service, query_name)

        # assert response contains query ID
        assert 'ID' in query \
            and query['ID'] is not None \
            and str(query['ID']) != ''

        # retrieve query using id and name
        queries = c.query.get(query['ID'])
        assert queries != [] \
            and len(queries) == 1
        assert queries[0]['Name'] == query_name \
            and queries[0]['ID'] == query['ID']

        # explain query
        assert c.query.explain(query_name)['Query']

        # delete query
        assert c.query.delete(query['ID'])

    def test_coordinate(self, consul_port):
        c = consul.Consul(port=consul_port)
        c.coordinate.nodes()
        c.coordinate.datacenters()
        assert set(c.coordinate.datacenters()[0].keys()) == \
            set(['Datacenter', 'Coordinates', 'AreaID'])

    def test_operator(self, consul_port):
        c = consul.Consul(port=consul_port)
        config = c.operator.raft_config()
        assert config["Index"] == 1
        leader = False
        voter = False
        for server in config["Servers"]:
            if server["Leader"]:
                leader = True
            if server["Voter"]:
                voter = True
        assert leader
        assert voter
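The tests above synchronize with TTL health checks via fixed `time.sleep(...)` delays, which can be flaky under load. A common alternative is to poll until a condition holds; a minimal standalone sketch of that pattern (the `wait_for` helper and its parameters are illustrative, not part of python-consul):

```python
import time


def wait_for(predicate, timeout=2.0, interval=0.02):
    """Poll `predicate` until it returns a truthy value or `timeout`
    seconds elapse; returns the last predicate result."""
    deadline = time.monotonic() + timeout
    while True:
        result = predicate()
        if result or time.monotonic() >= deadline:
            return result
        time.sleep(interval)


# Illustrative use: in the tests above one could replace a bare sleep with
# e.g. wait_for(lambda: c.health.service('foo', passing=True)[1]).
start = time.monotonic()
assert wait_for(lambda: time.monotonic() - start > 0.05)
```

This trades a fixed worst-case delay for an upper-bounded wait that returns as soon as the condition is met.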